Paper Title
Benchmarking Deep Spiking Neural Networks on Neuromorphic Hardware
Paper Authors
Paper Abstract
With more and more event-based neuromorphic hardware systems being developed at universities and in industry, there is a growing need for assessing their performance with domain specific measures. In this work, we use the methodology of converting pre-trained non-spiking to spiking neural networks to evaluate the performance loss and measure the energy-per-inference for three neuromorphic hardware systems (BrainScaleS, Spikey, SpiNNaker) and common simulation frameworks for CPU (NEST) and CPU/GPU (GeNN). For analog hardware we further apply a re-training technique known as hardware-in-the-loop training to cope with device mismatch. This analysis is performed for five different networks, including three networks that have been found by an automated optimization with a neural architecture search framework. We demonstrate that the conversion loss is usually below one percent for digital implementations, and moderately higher for analog systems with the benefit of much lower energy-per-inference costs.
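As background for the conversion methodology the abstract refers to, the following is a minimal, illustrative sketch in Python/NumPy of rate-based ANN-to-SNN conversion: layer-wise data-based weight normalization followed by a simple integrate-and-fire simulation. The toy network sizes, random weights, constant-current input encoding, threshold, and number of timesteps are assumptions chosen for illustration only; this is not the paper's actual pipeline, its five networks, or any of the hardware backends (BrainScaleS, Spikey, SpiNNaker, NEST, GeNN).

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "pre-trained" 2-layer ReLU network (784 -> 128 -> 10); weights are
    # random stand-ins, not an actual trained model.
    W1, b1 = rng.normal(0.0, 0.05, (784, 128)), np.zeros(128)
    W2, b2 = rng.normal(0.0, 0.05, (128, 10)), np.zeros(10)
    x = rng.random((32, 784))  # calibration/test batch with inputs in [0, 1]

    def relu(a):
        return np.maximum(a, 0.0)

    # Step 1: data-based weight normalization so each layer's peak activation
    # corresponds to at most one spike per timestep.
    lam0 = 1.0                      # input scale (inputs already in [0, 1])
    a1 = relu(x @ W1 + b1)
    lam1 = a1.max()
    a2 = relu(a1 @ W2 + b2)
    lam2 = a2.max()
    layers = [(W1 * lam0 / lam1, b1 / lam1),
              (W2 * lam1 / lam2, b2 / lam2)]

    # Step 2: replace ReLUs with non-leaky integrate-and-fire neurons and
    # simulate for T timesteps with constant-current input encoding.
    def run_snn(x, layers, T=200, v_th=1.0):
        v = [np.zeros((x.shape[0], W.shape[1])) for W, _ in layers]
        out_counts = np.zeros((x.shape[0], layers[-1][0].shape[1]))
        for _ in range(T):
            inp = x
            for i, (W, b) in enumerate(layers):
                v[i] += inp @ W + b              # integrate input current
                spikes = (v[i] >= v_th).astype(float)
                v[i] -= spikes * v_th            # reset by subtraction
                inp = spikes
            out_counts += inp                    # spikes of the output layer
        return out_counts / T                    # output firing rates

    snn_rates = run_snn(x, layers)
    ann_out = relu(relu(x @ W1 + b1) @ W2 + b2)
    print("SNN argmax:", snn_rates.argmax(axis=1)[:8])
    print("ANN argmax:", ann_out.argmax(axis=1)[:8])

Under these assumptions, the per-layer scaling keeps firing rates at or below one spike per timestep, and with enough timesteps the output rates approximate the scaled ReLU activations, which is the general reason a conversion of this kind can incur only a small classification loss.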