Paper Title
High-parallelism Inception-like Spiking Neural Networks for Unsupervised Feature Learning
Paper Authors
Paper Abstract
Spiking Neural Networks (SNNs) are brain-inspired, event-driven machine learning algorithms that are widely recognized for enabling ultra-energy-efficient hardware. Among existing SNNs, unsupervised SNNs based on synaptic plasticity, especially Spike-Timing-Dependent Plasticity (STDP), are considered to have great potential for imitating the learning process of the biological brain. Nevertheless, existing STDP-based SNNs are limited by constrained learning capability and/or slow learning speed: most adopt a slow-learning Fully-Connected (FC) architecture and use a sub-optimal vote-based scheme for spike decoding. In this paper, we overcome these limitations with: 1) a high-parallelism network architecture inspired by the Inception module in Artificial Neural Networks (ANNs); 2) a Vote-for-All (VFA) decoding layer as a replacement for the standard vote-based spike decoding scheme, reducing the information loss in spike decoding; and 3) an adaptive repolarization (resetting) mechanism that accelerates learning by enhancing spiking activity. Our experimental results on two established benchmark datasets (MNIST/EMNIST) show that our network architecture achieves superior performance compared to the widely used FC architecture and a more advanced Locally-Connected (LC) architecture, and that our SNN achieves results competitive with state-of-the-art unsupervised SNNs (95.64%/80.11% accuracy on the MNIST/EMNIST datasets) while offering superior learning efficiency and robustness against hardware damage. Our SNN reaches high classification accuracy within only hundreds of training iterations, and randomly destroying large numbers of synapses or neurons leads to only negligible performance degradation.
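For readers unfamiliar with the plasticity rule the abstract builds on, below is a minimal sketch of pair-based STDP, the standard form of the rule; it is not the paper's exact formulation, and all parameter values (A_PLUS, A_MINUS, TAU_PLUS, TAU_MINUS) are illustrative assumptions.

```python
# Minimal pair-based STDP sketch: a synapse is strengthened when the
# presynaptic spike precedes the postsynaptic spike (LTP) and weakened
# otherwise (LTD), with exponentially decaying influence over the spike
# time difference. Parameter values are assumed for illustration only.
import numpy as np

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes (assumed)
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # plasticity time constants in ms (assumed)

def stdp_delta_w(t_pre: float, t_post: float) -> float:
    """Weight change for one pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)    # pre before post: LTP
    else:
        return -A_MINUS * np.exp(dt / TAU_MINUS)  # post before pre: LTD

# Example: a pre-spike at 5 ms followed by a post-spike at 12 ms
# yields a positive (potentiating) weight update.
print(stdp_delta_w(t_pre=5.0, t_post=12.0))
```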
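The abstract's adaptive repolarization (resetting) mechanism could be pictured as follows: in a leaky integrate-and-fire (LIF) neuron, the post-spike reset potential is raised toward the threshold when recent firing activity is low, so neurons fire more readily. This interpretation, and every constant and helper below (lif_step, adaptive_reset, target_rate), is an assumption for illustration; the paper's actual rule may differ.

```python
# Hedged sketch of an adaptive reset in a LIF neuron: the reset target
# moves toward the firing threshold when the recent spike rate falls
# below a target rate, enhancing spiking activity. All values assumed.
import numpy as np

V_THRESH, V_REST = 1.0, 0.0  # threshold and resting potential (assumed units)
TAU_M = 10.0                 # membrane time constant in ms (assumed)
DT = 1.0                     # simulation step in ms (assumed)

def lif_step(v: np.ndarray, i_in: np.ndarray, v_reset: float):
    """One Euler step of LIF dynamics with a configurable reset potential."""
    v = v + (DT / TAU_M) * (V_REST - v) + i_in
    spikes = v >= V_THRESH
    v[spikes] = v_reset  # repolarization to the (possibly adaptive) reset value
    return v, spikes

def adaptive_reset(recent_rate: float, target_rate: float = 0.05) -> float:
    """Raise the reset potential toward threshold when the recent firing
    rate falls below a target rate (assumed functional form)."""
    deficit = max(0.0, target_rate - recent_rate) / target_rate
    return V_REST + deficit * 0.5 * (V_THRESH - V_REST)

# Toy simulation of 8 neurons driven by random input current.
v, rate = np.zeros(8), 0.0
for t in range(100):
    i_in = np.random.rand(8) * 0.15              # toy input current
    v, spikes = lif_step(v, i_in, adaptive_reset(rate))
    rate = 0.9 * rate + 0.1 * spikes.mean()      # running firing-rate estimate
```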