Paper Title

Message Passing Neural Processes

Paper Authors

Ben Day, Cătălina Cangea, Arian R. Jamasb, Pietro Liò

Paper Abstract

Neural Processes (NPs) are powerful and flexible models able to incorporate uncertainty when representing stochastic processes, while maintaining a linear time complexity. However, NPs produce a latent description by aggregating independent representations of context points and lack the ability to exploit relational information present in many datasets. This renders NPs ineffective in settings where the stochastic process is primarily governed by neighbourhood rules, such as cellular automata (CA), and limits performance for any task where relational information remains unused. We address this shortcoming by introducing Message Passing Neural Processes (MPNPs), the first class of NPs that explicitly makes use of relational structure within the model. Our evaluation shows that MPNPs thrive at lower sampling rates, on existing benchmarks and newly-proposed CA and Cora-Branched tasks. We further report strong generalisation over density-based CA rule-sets and significant gains in challenging arbitrary-labelling and few-shot learning setups.
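
To make the contrast with standard NP encoders concrete, the following is a minimal, illustrative sketch of how message passing over a context graph could be combined with an NP-style latent aggregation. It is not the authors' reference implementation: the module names, dimensions, single round of message passing, and mean-pooling aggregation are all assumptions chosen for brevity.

```python
# Minimal sketch (assumed design, not the paper's exact architecture): context node
# features exchange messages along graph edges before being pooled into a single
# latent description, instead of being encoded fully independently as in a plain NP.
import torch
import torch.nn as nn


class MessagePassingEncoder(nn.Module):
    """Encodes context nodes into a global latent: one round of message passing
    along edges, then permutation-invariant mean pooling over node states."""

    def __init__(self, in_dim, hidden_dim, latent_dim):
        super().__init__()
        self.embed = nn.Linear(in_dim, hidden_dim)
        self.message = nn.Sequential(nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU())
        self.update = nn.GRUCell(hidden_dim, hidden_dim)
        self.to_latent = nn.Linear(hidden_dim, 2 * latent_dim)  # mean and log-variance

    def forward(self, x, edge_index):
        # x: (num_nodes, in_dim) features of the context points
        # edge_index: (2, num_edges) source/target node indices of the edges
        h = torch.relu(self.embed(x))
        src, dst = edge_index
        # Each edge produces a message from its endpoint states; messages are
        # summed into the target node, then used to update the node state.
        m = self.message(torch.cat([h[src], h[dst]], dim=-1))
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        h = self.update(agg, h)
        # Aggregate relational node representations into one latent description.
        pooled = h.mean(dim=0)
        mu, logvar = self.to_latent(pooled).chunk(2, dim=-1)
        return mu, logvar


# Example usage on a toy 4-node context graph (hypothetical shapes).
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 0]])
mu, logvar = MessagePassingEncoder(8, 32, 16)(x, edge_index)
z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterised latent sample
```

The key design point this sketch illustrates is that node representations are conditioned on their neighbourhoods before aggregation, which is what allows neighbourhood-governed processes such as cellular automata to be captured; a decoder conditioned on `z` and target inputs would complete the NP pipeline.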
