Paper Title

Pavlov Learning Machines

Authors

Elena Agliari, Miriam Aquaro, Adriano Barra, Alberto Fachechi, Chiara Marullo

Abstract

As is well known, Hebb's learning traces its origin to Pavlov's Classical Conditioning; however, while the former has been extensively modelled in the past decades (e.g., by the Hopfield model and countless variations on the theme), modelling of the latter has remained largely unaddressed so far; further, a bridge between these two pillars is entirely lacking. The main difficulty towards this goal lies in the intrinsically different scales of the information involved: Pavlov's theory is about correlations among \emph{concepts} that are (dynamically) stored in the synaptic matrix, as exemplified by the celebrated experiment starring a dog and a bell; conversely, Hebb's theory is about correlations among pairs of adjacent neurons, as summarized by the famous statement that \emph{neurons that fire together wire together}. In this paper we rely on stochastic-process theory and model neural and synaptic dynamics via Langevin equations, to prove that, as long as the neuronal and synaptic timescales are kept largely split, Pavlov's mechanism spontaneously takes place and ultimately gives rise to synaptic weights that recover the Hebbian kernel.
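To make the timescale-separation idea concrete, here is a minimal numerical sketch in Python: neurons follow a fast Langevin equation under externally presented stimuli, while synapses follow a much slower relaxation towards the instantaneous neural pair correlations. This is not the paper's actual system of equations; the tanh drift, the cyclic stimulus schedule, and all parameters (N, P, tau_s, tau_J, beta) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N, P = 50, 3              # neurons and number of stimulus patterns (illustrative)
tau_s, tau_J = 1.0, 1e3   # fast neural vs. slow synaptic timescale (tau_J >> tau_s)
dt, steps = 0.1, 200_000  # Euler-Maruyama step size and number of steps
beta = 2.0                # inverse temperature controlling the neural noise

xi = rng.choice([-1.0, 1.0], size=(P, N))  # random binary "concepts" / stimuli
hebb = xi.T @ xi / N                       # Hebbian kernel used for comparison

sigma = rng.standard_normal(N)  # neural activities
J = np.zeros((N, N))            # synaptic weights, learned from scratch

for t in range(steps):
    mu = (t // 1000) % P          # present one stimulus at a time, cyclically
    h = J @ sigma + xi[mu]        # local field: recurrent input + external stimulus
    # fast Langevin equation for neurons: noisy relaxation towards tanh(beta * h)
    sigma += dt / tau_s * (-sigma + np.tanh(beta * h)) \
             + np.sqrt(2 * dt / (beta * tau_s)) * rng.standard_normal(N)
    # slow Langevin-type equation for synapses, driven by neural pair correlations
    # (synaptic noise omitted here for simplicity)
    J += dt / tau_J * (-J + np.outer(sigma, sigma) / N)

# after learning, J should align with the Hebbian kernel up to an overall scale
cos = np.sum(J * hebb) / (np.linalg.norm(J) * np.linalg.norm(hebb))
print(f"cosine similarity between learned J and Hebbian kernel: {cos:.3f}")
```

Because tau_J is much larger than tau_s, J effectively averages the pair correlations sigma_i * sigma_j over many stimulus presentations, which is why it drifts towards a Hebbian-like kernel proportional to (1/N) * sum_mu xi_i^mu xi_j^mu.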
