Paper Title

Machine Unlearning for Random Forests

Paper Authors

Jonathan Brophy, Daniel Lowd

Paper Abstract

Responding to user data deletion requests, removing noisy examples, or deleting corrupted training data are just a few reasons for wanting to delete instances from a machine learning (ML) model. However, efficiently removing this data from an ML model is generally difficult. In this paper, we introduce data removal-enabled (DaRE) forests, a variant of random forests that enables the removal of training data with minimal retraining. Model updates for each DaRE tree in the forest are exact, meaning that removing instances from a DaRE model yields exactly the same model as retraining from scratch on updated data. DaRE trees use randomness and caching to make data deletion efficient. The upper levels of DaRE trees use random nodes, which choose split attributes and thresholds uniformly at random. These nodes rarely require updates because they only minimally depend on the data. At the lower levels, splits are chosen to greedily optimize a split criterion such as Gini index or mutual information. DaRE trees cache statistics at each node and training data at each leaf, so that only the necessary subtrees are updated as data is removed. For numerical attributes, greedy nodes optimize over a random subset of thresholds, so that they can maintain statistics while approximating the optimal threshold. By adjusting the number of thresholds considered for greedy nodes, and the number of random nodes, DaRE trees can trade off between more accurate predictions and more efficient updates. In experiments on 13 real-world datasets and one synthetic dataset, we find DaRE forests delete data orders of magnitude faster than retraining from scratch while sacrificing little to no predictive power.
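
To make the deletion mechanism concrete, below is a minimal Python sketch of a single DaRE-style tree for binary classification. This is an illustration under simplifying assumptions, not the authors' implementation: the names `DaRENode`, `n_random_levels`, and `n_thresholds` are ours, only numerical features are handled, and a rebuild recollects data from the subtree's leaves rather than using the paper's full bookkeeping. It does show the three ingredients described in the abstract: random splits at the upper levels, greedy splits scored over a random subset of cached thresholds, and per-node statistics that let `delete` decrement counts and retrain only subtrees whose chosen split is invalidated.

```python
import random


def gini(pos, total):
    """Gini impurity 2p(1-p) for binary labels; 0 for an empty node."""
    if total == 0:
        return 0.0
    p = pos / total
    return 2.0 * p * (1.0 - p)


class DaRENode:
    """One node of a DaRE-style tree for binary classification (labels 0/1)."""

    def __init__(self, data, depth=0, n_random_levels=2, n_thresholds=5,
                 max_depth=6, min_leaf=2):
        self.depth = depth
        self.hp = (n_random_levels, n_thresholds, max_depth, min_leaf)
        self._fit(data)

    def _fit(self, data):
        n_random_levels, n_thresholds, max_depth, min_leaf = self.hp
        if (self.depth >= max_depth or len(data) < 2 * min_leaf
                or len({y for _, y in data}) < 2):
            self.leaf, self.data = True, list(data)  # leaves cache raw instances
            return
        self.feature = random.randrange(len(data[0][0]))
        values = sorted({x[self.feature] for x, _ in data})
        if len(values) < 2:  # feature constant here: fall back to a leaf
            self.leaf, self.data = True, list(data)
            return
        self.leaf = False
        if self.depth < n_random_levels:
            # Random node: one threshold drawn independently of the labels.
            self.candidates = [random.uniform(values[0], values[-1])]
        else:
            # Greedy node: score only a random subset of candidate thresholds.
            k = min(n_thresholds, len(values) - 1)
            self.candidates = random.sample(values[:-1], k)
        # Cache per-candidate counts: [n_left, pos_left, n_right, pos_right].
        n, pos = len(data), sum(y for _, y in data)
        self.stats = {}
        for t in self.candidates:
            nl = sum(1 for x, _ in data if x[self.feature] <= t)
            pl = sum(y for x, y in data if x[self.feature] <= t)
            self.stats[t] = [nl, pl, n - nl, pos - pl]
        self.threshold = min(self.candidates, key=self._score)
        kw = dict(zip(("n_random_levels", "n_thresholds", "max_depth",
                       "min_leaf"), self.hp))
        self.left = DaRENode([(x, y) for x, y in data
                              if x[self.feature] <= self.threshold],
                             self.depth + 1, **kw)
        self.right = DaRENode([(x, y) for x, y in data
                               if x[self.feature] > self.threshold],
                              self.depth + 1, **kw)

    def _score(self, t):
        """Weighted Gini of the split at threshold t, from cached counts alone."""
        nl, pl, nr, pr = self.stats[t]
        return (nl * gini(pl, nl) + nr * gini(pr, nr)) / (nl + nr)

    def _collect(self):
        """Gather the training instances cached in this subtree's leaves."""
        if self.leaf:
            return list(self.data)
        return self.left._collect() + self.right._collect()

    def delete(self, x, y):
        """Remove one training instance, retraining only invalidated subtrees."""
        if self.leaf:
            self.data.remove((x, y))
            return
        for t, s in self.stats.items():  # O(k) count updates, no data scan
            i = 0 if x[self.feature] <= t else 2
            s[i] -= 1
            s[i + 1] -= y
        nl, _, nr, _ = self.stats[self.threshold]
        if nl == 0 or nr == 0 or min(self.candidates,
                                     key=self._score) != self.threshold:
            remaining = self._collect()  # chosen split invalidated: rebuild
            remaining.remove((x, y))     # this subtree from its cached leaves
            self._fit(remaining)
            return
        child = self.left if x[self.feature] <= self.threshold else self.right
        child.delete(x, y)

    def predict(self, x):
        if self.leaf:
            return int(2 * sum(y for _, y in self.data) >= max(len(self.data), 1))
        child = self.left if x[self.feature] <= self.threshold else self.right
        return child.predict(x)


# Toy usage: learnable labels, delete one instance, the model stays valid.
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(200)]
data = [(p, int(p[0] + p[1] > 1.0)) for p in pts]
tree = DaRENode(data)
tree.delete(*data[0])  # exact removal; only invalidated subtrees are rebuilt
print(tree.predict((0.9, 0.8)))  # likely 1: the point lies above x0 + x1 = 1
```

The design point is visible in `delete`: as long as the decremented counts still select the same best threshold among the cached candidates, the stored split is exactly what retraining on the remaining data (with the same candidates) would choose, so the recursion touches a single root-to-leaf path; a rebuild is confined to the one subtree whose split changes. A DaRE forest would simply be an ensemble of such trees, with deletion applied to each tree independently.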
