Paper Title
NAR-Former: Neural Architecture Representation Learning towards Holistic Attributes Prediction
Paper Authors
Paper Abstract
With the wide and deep adoption of deep learning models in real applications, there is an increasing need to model and learn the representations of the neural networks themselves. These models can be used to estimate attributes of different neural network architectures, such as accuracy and latency, without running the actual training or inference tasks. In this paper, we propose a neural architecture representation model that can be used to estimate these attributes holistically. Specifically, we first propose a simple and effective tokenizer to encode both the operation and topology information of a neural network into a single sequence. Then, we design a multi-stage fusion transformer to build a compact vector representation from the converted sequence. For efficient model training, we further propose an information flow consistency augmentation and correspondingly design an architecture consistency loss, which brings greater benefits with fewer augmentation samples compared with previous random augmentation strategies. Experimental results on NAS-Bench-101, NAS-Bench-201, the DARTS search space, and NNLQP show that our proposed framework can be used to predict the aforementioned latency and accuracy attributes of both cell architectures and whole deep neural networks, and achieves promising performance. Code is available at https://github.com/yuny220/NAR-Former.
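To make the tokenization idea from the abstract concrete, below is a minimal, illustrative Python sketch of flattening a cell's operations and topology (its adjacency matrix) into a single sequence of fixed-length tokens. The operation vocabulary, the function name `encode_architecture`, and the sinusoidal/topology encoding details are assumptions made for illustration only, not the paper's actual tokenizer; refer to the linked repository for the authors' implementation.

```python
# Illustrative sketch only: NOT the paper's actual tokenizer. It shows one way to
# flatten a cell's operations and topology (adjacency) into a single token
# sequence that a transformer-style attribute predictor could consume.
# All names (OP_VOCAB, encode_architecture) are hypothetical.

import numpy as np

# Hypothetical operation vocabulary for a NAS-Bench-101-style cell.
OP_VOCAB = {"input": 0, "conv1x1": 1, "conv3x3": 2, "maxpool3x3": 3, "output": 4}


def encode_architecture(ops, adjacency, dim=8):
    """Encode one cell as a sequence of per-node token vectors.

    ops:       list of operation names, one per node (e.g. ["input", "conv3x3", "output"]).
    adjacency: (N, N) 0/1 matrix, adjacency[i][j] = 1 if node i feeds node j.
    Returns an (N, 3*dim) float array: [position enc | op enc | topology enc] per node.
    """
    n = len(ops)
    adjacency = np.asarray(adjacency, dtype=np.float32)
    tokens = np.zeros((n, 3 * dim), dtype=np.float32)

    # Sinusoidal frequencies used to embed scalar codes into fixed-length vectors.
    freqs = 1.0 / (10000 ** (np.arange(dim // 2) * 2.0 / dim))

    def sin_cos(value):
        angles = value * freqs
        return np.concatenate([np.sin(angles), np.cos(angles)])

    for i, op in enumerate(ops):
        pos_enc = sin_cos(i)            # where the node sits in the cell
        op_enc = sin_cos(OP_VOCAB[op])  # which operation the node performs
        # Summarize incoming edges as one scalar (a crude stand-in for a
        # topology encoding) and embed it the same way.
        src_code = sum(2.0 ** j for j in range(n) if adjacency[j, i] > 0)
        topo_enc = sin_cos(src_code)
        tokens[i] = np.concatenate([pos_enc, op_enc, topo_enc])
    return tokens


if __name__ == "__main__":
    ops = ["input", "conv3x3", "maxpool3x3", "output"]
    adj = [[0, 1, 1, 0],
           [0, 0, 0, 1],
           [0, 0, 0, 1],
           [0, 0, 0, 0]]
    seq = encode_architecture(ops, adj)
    print(seq.shape)  # (4, 24): one fixed-length token per node
```

Under this sketch, a downstream predictor (such as the multi-stage fusion transformer described in the abstract) would take the resulting token sequence as input and regress attributes like accuracy or latency.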