Paper Title

High-dimensional Neural Feature Design for Layer-wise Reduction of Training Cost

Authors

Javid, Alireza M., Venkitaraman, Arun, Skoglund, Mikael, Chatterjee, Saikat

Abstract

We design a ReLU-based multilayer neural network by mapping the feature vectors to a higher-dimensional space in every layer. We design the weight matrices in every layer to ensure a reduction of the training cost as the number of layers increases. Linear projection to the target in the higher-dimensional space leads to a lower training cost if a convex cost is minimized. An $\ell_2$-norm convex constraint is used in the minimization to reduce the generalization error and avoid overfitting. The regularization hyperparameters of the network are derived analytically to guarantee a monotonic decrease of the training cost, thereby eliminating the need for cross-validation to find the regularization hyperparameter in each layer. We show that the proposed architecture is norm-preserving and provides an invertible feature vector, and can therefore be used to reduce the training cost of any other learning method that employs linear projection to estimate the target.
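The norm-preserving, invertible ReLU feature map claimed in the abstract can be illustrated with a minimal sketch. The mapping x ↦ ReLU([x; −x]) is one standard construction with both properties (since ReLU(x) − ReLU(−x) = x and the positive and negative parts occupy disjoint coordinates); this is an assumption for illustration, not a reproduction of the paper's full weight-matrix design:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def lift(x):
    # Map x to a space of twice the dimension via [x; -x], then apply ReLU.
    # The positive part of x and the positive part of -x land in disjoint
    # coordinates, so no information is destroyed.
    return relu(np.concatenate([x, -x]))

def invert(y):
    # Recover x exactly: ReLU(x) - ReLU(-x) = x for every real x.
    n = y.shape[0] // 2
    return y[:n] - y[n:]

rng = np.random.default_rng(0)
x = rng.standard_normal(5)
y = lift(x)

print(np.allclose(np.linalg.norm(y), np.linalg.norm(x)))  # True: norm-preserving
print(np.allclose(invert(y), x))                          # True: invertible
```

Because the lifted features lose no information and keep the same norm, a linear projection fitted on them can never do worse than one fitted on the original features, which is the intuition behind the layer-wise cost reduction described above.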
