Paper Title

A One-step Approach to Covariate Shift Adaptation

Paper Authors

Tianyi Zhang, Ikko Yamane, Nan Lu, Masashi Sugiyama

Paper Abstract

A default assumption in many machine learning scenarios is that the training and test samples are drawn from the same probability distribution. However, such an assumption is often violated in the real world due to non-stationarity of the environment or bias in sample selection. In this work, we consider a prevalent setting called covariate shift, where the input distribution differs between the training and test stages while the conditional distribution of the output given the input remains unchanged. Most of the existing methods for covariate shift adaptation are two-step approaches, which first calculate the importance weights and then conduct importance-weighted empirical risk minimization. In this paper, we propose a novel one-step approach that jointly learns the predictive model and the associated weights in one optimization by minimizing an upper bound of the test risk. We theoretically analyze the proposed method and provide a generalization error bound. We also empirically demonstrate the effectiveness of the proposed method.
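For reference, the two-step approach mentioned in the abstract rests on the standard importance-weighting identity; the sketch below uses notation introduced here only for illustration ($p_{\mathrm{tr}}(x)$ and $p_{\mathrm{te}}(x)$ for the training and test input densities, $\ell$ for the loss, $f$ for the predictor), not notation taken from the paper. Under covariate shift, $p_{\mathrm{tr}}(y \mid x) = p_{\mathrm{te}}(y \mid x)$ while $p_{\mathrm{tr}}(x) \neq p_{\mathrm{te}}(x)$, so the test risk can be rewritten as an expectation over the training distribution:

$$
R_{\mathrm{te}}(f)
  = \mathbb{E}_{p_{\mathrm{te}}(x,\,y)}\bigl[\ell(f(x), y)\bigr]
  = \mathbb{E}_{p_{\mathrm{tr}}(x,\,y)}\!\left[ \frac{p_{\mathrm{te}}(x)}{p_{\mathrm{tr}}(x)}\, \ell(f(x), y) \right].
$$

A two-step method first estimates the importance weight $w(x) = p_{\mathrm{te}}(x)/p_{\mathrm{tr}}(x)$ and then minimizes the weighted empirical risk $\frac{1}{n}\sum_{i=1}^{n} w(x_i)\,\ell(f(x_i), y_i)$ on the training sample, whereas the one-step method proposed in the paper learns $f$ and the weights jointly by minimizing an upper bound of $R_{\mathrm{te}}(f)$.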
