Paper Title


A Dual-Dimer Method for Training Physics-Constrained Neural Networks with Minimax Architecture

Paper Authors

Dehao Liu, Yan Wang

Paper Abstract


Data sparsity is a common issue in training machine learning tools such as neural networks for engineering and scientific applications, where experiments and simulations are expensive. Recently, physics-constrained neural networks (PCNNs) were developed to reduce the required amount of training data. However, in PCNNs the weights of the different losses from data and physical constraints are tuned empirically. In this paper, a new physics-constrained neural network with a minimax architecture (PCNN-MM) is proposed so that the weights of the different losses can be adjusted systematically. Training the PCNN-MM amounts to searching for high-order saddle points of the objective function. A novel saddle point search algorithm called the Dual-Dimer method is developed. It is demonstrated that the Dual-Dimer method is computationally more efficient than the gradient descent ascent method for nonconvex-nonconcave functions and provides additional eigenvalue information to verify search results. A heat transfer example also shows that PCNN-MMs converge faster than traditional PCNNs.
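The minimax idea in the abstract can be illustrated with a toy sketch: the model parameters minimize a weighted sum of a data loss and a physics-constraint loss, while the loss weights are simultaneously maximized, so the weights need not be tuned by hand. This is a minimal illustration using plain gradient descent ascent on hypothetical quadratic losses, not the paper's Dual-Dimer method or its neural-network setting; all names and losses below are invented for illustration.

```python
import numpy as np

def data_loss(w):
    # hypothetical data-fit residual (stand-in for a network's data loss)
    return (w[0] - 1.0) ** 2

def physics_loss(w):
    # hypothetical physics residual (stand-in for a PDE-constraint loss)
    return (w[0] + w[1]) ** 2

def grad(f, w, eps=1e-6):
    # central finite-difference gradient, to keep the sketch dependency-free
    g = np.zeros_like(w)
    for i in range(len(w)):
        d = np.zeros_like(w)
        d[i] = eps
        g[i] = (f(w + d) - f(w - d)) / (2 * eps)
    return g

w = np.array([0.0, 0.0])    # model parameters: minimize the weighted loss
lam = np.array([1.0, 1.0])  # loss weights: maximized, so they grow while
eta = 0.05                  # a constraint is still violated

for _ in range(500):
    total = lambda v: lam[0] * data_loss(v) + lam[1] * physics_loss(v)
    w = w - eta * grad(total, w)  # descent step in the parameters
    # ascent step in the weights: d(total)/d(lam_i) is just the i-th loss
    lam = lam + eta * np.array([data_loss(w), physics_loss(w)])
```

Because the weight update is the loss value itself, whichever constraint is violated most receives a growing weight, pushing the parameters toward satisfying it; once both residuals vanish the weights stop changing, which is the systematic weight adjustment the minimax formulation provides.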
