Paper Title
Learning Sparsity and Randomness for Data-driven Low Rank Approximation
Paper Authors
Paper Abstract
Learning-based algorithms can significantly improve the performance of randomized low rank approximation with a sketch matrix. With values learned at fixed non-zero positions, the sketch matrices produced by learning-based algorithms substantially reduce the test error of low rank approximation. However, there is still no good method for learning the non-zero positions, nor for overcoming the loss of out-of-distribution performance. In this work, we introduce two new methods, Learning Sparsity and Learning Randomness, which attempt to learn a better sparsity pattern and to add randomness to the values of the sketch matrix. Both methods can be applied with any learning-based algorithm that uses a sketch matrix directly. Our experiments show that the two methods improve the performance of previous learning-based algorithms on both in-distribution and out-of-distribution test error without adding much complexity.
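As background for the abstract, the sketch matrices it refers to are sparse random matrices (CountSketch-style: one non-zero per column, at fixed positions) that compress a matrix before a rank-k projection; learning-based variants keep the sparsity pattern fixed and learn the non-zero values. The snippet below is a minimal NumPy sketch of this classical (unlearned) baseline, not the paper's method; the function names and the example matrix are illustrative assumptions.

```python
import numpy as np

def countsketch(m, n, rng):
    # CountSketch-style sparse sketch: one non-zero (+/-1) per column
    # at a random row. Learning-based methods keep these positions
    # fixed and train the values instead.
    S = np.zeros((m, n))
    rows = rng.integers(0, m, size=n)
    S[rows, np.arange(n)] = rng.choice([-1.0, 1.0], size=n)
    return S

def sketched_rank_k(A, S, k):
    # Project A onto the top-k right singular subspace of the
    # small sketched matrix S @ A.
    SA = S @ A
    _, _, Vt = np.linalg.svd(SA, full_matrices=False)
    V = Vt[:k].T
    return A @ V @ V.T

# Illustrative input: an approximately rank-10 matrix plus noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 40))
A += 0.01 * rng.standard_normal((200, 40))

S = countsketch(20, 200, rng)          # sketch 200 rows down to 20
A_k = sketched_rank_k(A, S, k=10)
err = np.linalg.norm(A - A_k, "fro") / np.linalg.norm(A, "fro")
```

Because `A` is close to rank 10 and the sketch dimension (20) exceeds the target rank, the relative Frobenius error `err` is small; the test error the abstract discusses is this quantity averaged over matrices drawn from a data distribution.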