Title

A General Wasserstein Framework for Data-driven Distributionally Robust Optimization: Tractability and Applications

Authors

Jonathan Yu-Meng Li and Tiantian Mao

Abstract

Data-driven distributionally robust optimization is a recently emerging paradigm aimed at finding a solution that is driven by sample data but is protected against sampling errors. An increasingly popular approach, known as Wasserstein distributionally robust optimization (DRO), achieves this by applying the Wasserstein metric to construct a ball centred at the empirical distribution and finding a solution that performs well against the most adversarial distribution from the ball. In this paper, we present a general framework for studying different choices of a Wasserstein metric and point out the limitations of existing choices. In particular, while choosing a Wasserstein metric of a higher order is desirable from a data-driven perspective, given its less conservative nature, such a choice comes with a high price from a robustness perspective: it is no longer applicable to many heavy-tailed distributions of practical concern. We show that this seemingly inevitable trade-off can be resolved by our framework, where a new class of Wasserstein metrics, called coherent Wasserstein metrics, is introduced. Like Wasserstein DRO, distributionally robust optimization using the coherent Wasserstein metrics, termed generalized Wasserstein distributionally robust optimization (GW-DRO), has all the desirable performance guarantees: finite-sample guarantee, asymptotic consistency, and computational tractability. The worst-case expectation problem in GW-DRO is in general a nonconvex optimization problem, yet we provide new analysis to prove its tractability without relying on the common duality scheme. Our framework, as shown in this paper, offers a fruitful opportunity to design novel Wasserstein DRO models that can be applied in various contexts such as operations management, finance, and machine learning.
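For readers unfamiliar with the setup the abstract refers to, the standard order-$p$ Wasserstein DRO problem can be sketched as follows. The notation here (loss $h$, empirical distribution $\hat{P}_N$, radius $\varepsilon$) is the conventional one from the Wasserstein DRO literature, not necessarily the paper's own:

```latex
\min_{x \in X} \;\; \sup_{Q \,:\, W_p(Q,\, \hat{P}_N) \le \varepsilon} \; \mathbb{E}_{Q}\bigl[h(x, \xi)\bigr],
\qquad
W_p(P, Q) = \Bigl( \inf_{\pi \in \Pi(P, Q)} \int d(\xi, \zeta)^p \, \mathrm{d}\pi(\xi, \zeta) \Bigr)^{1/p},
```

where $\Pi(P, Q)$ is the set of joint distributions (couplings) with marginals $P$ and $Q$, $\hat{P}_N$ is the empirical distribution of the $N$ data samples, and $\varepsilon$ is the radius of the ambiguity ball. The trade-off discussed in the abstract arises because a larger $p$ makes the ball less conservative, but $W_p(Q, \hat{P}_N)$ is finite only when $Q$ has a finite $p$-th moment, which excludes many heavy-tailed distributions.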
