Paper Title

CASTLE: Regularization via Auxiliary Causal Graph Discovery

Authors

Trent Kyono, Yao Zhang, Mihaela van der Schaar

Abstract

Regularization improves generalization of supervised models to out-of-sample data. Prior works have shown that prediction in the causal direction (effect from cause) results in lower testing error than the anti-causal direction. However, existing regularization methods are agnostic of causality. We introduce Causal Structure Learning (CASTLE) regularization and propose to regularize a neural network by jointly learning the causal relationships between variables. CASTLE learns the causal directed acyclic graph (DAG) as an adjacency matrix embedded in the neural network's input layers, thereby facilitating the discovery of optimal predictors. Furthermore, CASTLE efficiently reconstructs only the features in the causal DAG that have a causal neighbor, whereas reconstruction-based regularizers suboptimally reconstruct all input features. We provide a theoretical generalization bound for our approach and conduct experiments on a plethora of synthetic and real publicly available datasets demonstrating that CASTLE consistently leads to better out-of-sample predictions as compared to other popular benchmark regularizers.
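The abstract describes learning a DAG adjacency matrix jointly with the predictor. A minimal sketch of the core ingredient, assuming a NOTEARS-style acyclicity penalty h(W) = tr(exp(W∘W)) − d (the constraint family CASTLE-like methods commonly build on; the combined loss and its weights `lam`, `beta` below are illustrative, not the paper's exact objective):

```python
import numpy as np

def acyclicity_penalty(W: np.ndarray, terms: int = 20) -> float:
    """NOTEARS-style measure h(W) = tr(exp(W * W)) - d.

    h(W) == 0 iff the weighted adjacency matrix W encodes a DAG.
    The matrix exponential trace is approximated by a truncated
    power series: tr(exp(A)) = sum_k tr(A^k) / k!.
    """
    A = W * W                      # elementwise square -> nonnegative weights
    d = A.shape[0]
    M = np.eye(d)                  # holds A^k, starting at k = 0
    trace_exp = float(np.trace(M))
    fact = 1.0
    for k in range(1, terms):
        M = M @ A
        fact *= k
        trace_exp += float(np.trace(M)) / fact
    return trace_exp - d

def castle_style_loss(pred_loss: float, recon_loss: float,
                      W: np.ndarray, lam: float = 1.0,
                      beta: float = 1.0) -> float:
    # Hypothetical combined objective: supervised prediction loss,
    # reconstruction of causal-neighbor features, and the DAG penalty.
    return pred_loss + lam * recon_loss + beta * acyclicity_penalty(W)
```

For a strictly upper-triangular W (a DAG) the penalty is exactly zero, while any cycle makes it positive, which is what lets gradient-based training push the learned adjacency matrix toward a valid causal DAG.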
