Paper Title
Overview frequency principle/spectral bias in deep learning
Authors

Abstract
Understanding deep learning has become increasingly important as it penetrates further into industry and science. In recent years, a line of research based on Fourier analysis has shed light on this magical "black box" by showing a Frequency Principle (F-Principle, or spectral bias) in the training behavior of deep neural networks (DNNs): DNNs often fit target functions from low to high frequency during training. The F-Principle was first demonstrated on one-dimensional synthetic data and then verified on high-dimensional real datasets. A series of subsequent works further strengthened the validity of the F-Principle. This low-frequency implicit bias reveals the strength of neural networks in learning low-frequency functions as well as their deficiency in learning high-frequency functions. Such understanding has inspired the design of DNN-based algorithms for practical problems, explains experimental phenomena emerging in various scenarios, and further advances the study of deep learning from the frequency perspective. Although incomplete, we provide an overview of the F-Principle and propose some open problems for future research.
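
To make the low-to-high-frequency training behavior concrete, the following is a minimal sketch, not the authors' original experimental setup: the use of PyTorch and NumPy, the network size, the learning rate, and the choice of frequencies 1 and 10 are all illustrative assumptions. It trains a small fully-connected network on a one-dimensional synthetic target and tracks the relative error at one low and one high frequency via the discrete Fourier transform; under the F-Principle, the low-frequency error typically converges first.

import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)

# 1D synthetic target with a low-frequency and a high-frequency component.
n = 256
x = (torch.arange(n, dtype=torch.float32) / n).reshape(-1, 1)   # samples in [0, 1)
y = torch.sin(2 * np.pi * 1 * x) + 0.5 * torch.sin(2 * np.pi * 10 * x)

# Small fully-connected network (sizes are illustrative assumptions).
model = nn.Sequential(
    nn.Linear(1, 200), nn.Tanh(),
    nn.Linear(200, 200), nn.Tanh(),
    nn.Linear(200, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

y_fft = np.fft.rfft(y.numpy().ravel())   # target spectrum; bin k = frequency k
freqs = [1, 10]                          # low- and high-frequency bins to monitor

for step in range(1, 5001):
    opt.zero_grad()
    pred = model(x)
    loss = loss_fn(pred, y)
    loss.backward()
    opt.step()

    if step % 1000 == 0:
        pred_fft = np.fft.rfft(pred.detach().numpy().ravel())
        rel_err = [abs(pred_fft[k] - y_fft[k]) / (abs(y_fft[k]) + 1e-12) for k in freqs]
        # Under the F-Principle, the error at frequency 1 typically shrinks
        # long before the error at frequency 10 does.
        print(f"step {step:5d}  loss {loss.item():.4f}  "
              f"rel. err @ freq 1: {rel_err[0]:.3f}  @ freq 10: {rel_err[1]:.3f}")

Printing the per-frequency relative errors during training is one simple way to observe the spectral bias directly; the exact convergence speeds will depend on the architecture and optimizer chosen.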