Paper Title

Random Vector Functional Link Networks for Function Approximation on Manifolds

Authors

Deanna Needell, Aaron A. Nelson, Rayan Saab, Palina Salanevich, Olov Schavemaker

Abstract

The learning speed of feed-forward neural networks is notoriously slow and has presented a bottleneck in deep learning applications for several decades. For instance, gradient-based learning algorithms, which are used extensively to train neural networks, tend to work slowly when all of the network parameters must be iteratively tuned. To counter this, both researchers and practitioners have tried introducing randomness to reduce the learning requirement. Based on the original construction of Igelnik and Pao, single layer neural-networks with random input-to-hidden layer weights and biases have seen success in practice, but the necessary theoretical justification is lacking. In this paper, we begin to fill this theoretical gap. We provide a (corrected) rigorous proof that the Igelnik and Pao construction is a universal approximator for continuous functions on compact domains, with approximation error decaying asymptotically like $O(1/\sqrt{n})$ for the number $n$ of network nodes. We then extend this result to the non-asymptotic setting, proving that one can achieve any desired approximation error with high probability provided $n$ is sufficiently large. We further adapt this randomized neural network architecture to approximate functions on smooth, compact submanifolds of Euclidean space, providing theoretical guarantees in both the asymptotic and non-asymptotic forms. Finally, we illustrate our results on manifolds with numerical experiments.
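To make the randomized architecture concrete, below is a minimal sketch of a single-hidden-layer network in the RVFL spirit: the input-to-hidden weights and biases are drawn at random once and frozen, and only the output-layer weights are fit, here by least squares. The function names, activation, sampling distributions, and hyperparameters are illustrative assumptions and are not the paper's exact Igelnik-Pao construction or the setting of its error guarantees.

import numpy as np

rng = np.random.default_rng(0)

def fit_rvfl(X, y, n_nodes=300, scale=1.0):
    """Draw random hidden weights/biases, then solve for the output weights."""
    d = X.shape[1]
    W = rng.normal(scale=scale, size=(d, n_nodes))   # random input-to-hidden weights (frozen)
    b = rng.uniform(-scale, scale, size=n_nodes)     # random biases (frozen)
    H = np.tanh(X @ W + b)                           # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)     # only the output weights are trained
    return W, b, beta

def predict_rvfl(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy example: approximate a continuous function on the compact domain [-1, 1]^2.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
W, b, beta = fit_rvfl(X, y)
X_test = rng.uniform(-1.0, 1.0, size=(100, 2))
y_test = np.sin(np.pi * X_test[:, 0]) * np.cos(np.pi * X_test[:, 1])
print("max test error:", np.max(np.abs(predict_rvfl(X_test, W, b, beta) - y_test)))

Because the hidden parameters are fixed, training reduces to a single linear least-squares solve, which is the source of the speed advantage over iteratively tuning all network parameters.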
