Paper Title
Distributionally Robust Variance Minimization: Tight Variance Bounds over $f$-Divergence Neighborhoods
Paper Authors
Paper Abstract
Distributionally robust optimization (DRO) is a widely used framework for optimizing objective functionals in the presence of both randomness and model-form uncertainty. A key step in the practical solution of many DRO problems is a tractable reformulation of the optimization over the chosen model ambiguity set, which is generally infinite dimensional. Previous works have solved this problem in the case where the objective functional is an expected value. In this paper we study objective functionals that are the sum of an expected value and a variance penalty term. We prove that the corresponding variance-penalized DRO problem over an $f$-divergence neighborhood can be reformulated as a finite-dimensional convex optimization problem. This result also provides tight uncertainty quantification bounds on the variance.
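As a rough illustration of the problem class the abstract describes (the notation below is assumed for this sketch and is not taken from the paper): with a baseline model $P$, a quantity of interest $g$, a variance-penalty weight $\lambda \ge 0$, and an $f$-divergence neighborhood of radius $\epsilon$, the worst-case variance-penalized objective has the form
\[
  \sup_{Q \,:\, D_f(Q \,\|\, P) \le \epsilon}
    \Big\{ \mathbb{E}_Q[g(X)] + \lambda\, \mathrm{Var}_Q[g(X)] \Big\},
  \qquad
  D_f(Q \,\|\, P) = \mathbb{E}_P\!\left[ f\!\left( \tfrac{dQ}{dP} \right) \right],
\]
an optimization over an infinite-dimensional set of probability measures $Q$; the paper's contribution, per the abstract, is a reformulation of this supremum as a finite-dimensional convex optimization problem, which in turn yields tight upper bounds on the variance under model-form uncertainty.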