Paper Title

Towards Lossless Binary Convolutional Neural Networks Using Piecewise Approximation

Authors

Baozhou Zhu, Zaid Al-Ars, Wei Pan

Abstract

Binary Convolutional Neural Networks (CNNs) can significantly reduce the number of arithmetic operations and the size of memory storage, which makes the deployment of CNNs on mobile or embedded systems more promising. However, the accuracy degradation of single and multiple binary CNNs is unacceptable for modern architectures and large-scale datasets like ImageNet. In this paper, we propose a Piecewise Approximation (PA) scheme for multiple binary CNNs which lessens accuracy loss by efficiently approximating full-precision weights and activations, and maintains the parallelism of bitwise operations to guarantee efficiency. Unlike previous approaches, the proposed PA scheme segments the full-precision weights and activations piecewise and approximates each piece with a scaling coefficient. Our implementation on ResNet with different depths on ImageNet can reduce both the Top-1 and Top-5 classification accuracy gaps compared with full precision to approximately 1.0%. Benefiting from the binarization of the downsampling layer, our proposed PA-ResNet50 requires less memory and two times the FLOPs compared with single binary CNNs with 4 weight and 5 activation bases. The PA scheme also generalizes to other architectures such as DenseNet and MobileNet with approximation power similar to ResNet, which is promising for other tasks using binary convolutions. The code and pretrained models will be publicly available.
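The core idea described in the abstract, segmenting full-precision values into pieces and approximating each piece with a scaled binary indicator so that the result is a sum of binary bases, can be illustrated with a minimal sketch. The breakpoints, the scale-by-piece-mean rule, and the function name piecewise_approximate below are illustrative assumptions for exposition, not the paper's exact formulation.

```python
import torch

def piecewise_approximate(x, breakpoints):
    """Approximate a full-precision tensor as a sum of scaled binary bases.

    Piece i covers values in (edges[i], edges[i+1]]; its binary indicator is
    scaled by the mean of the values it covers. Breakpoints and the
    scale-by-mean rule are illustrative assumptions, not the paper's exact
    PA formulation.
    """
    edges = [-float("inf")] + list(breakpoints) + [float("inf")]
    approx = torch.zeros_like(x)
    bases, scales = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = ((x > lo) & (x <= hi)).to(x.dtype)  # binary base for this piece
        if mask.sum() == 0:
            continue  # no values fall in this piece
        scale = (x * mask).sum() / mask.sum()      # scaling coefficient of the piece
        bases.append(mask)
        scales.append(scale)
        approx = approx + scale * mask
    return approx, bases, scales

# Toy usage: approximate random "activations" with three pieces.
if __name__ == "__main__":
    x = torch.randn(2, 4)
    approx, bases, scales = piecewise_approximate(x, breakpoints=[-0.5, 0.5])
    print("mean absolute error:", (x - approx).abs().mean().item())
```

Because the approximation is a sum of binary masks each multiplied by a single scalar, the expensive part of a convolution can in principle be carried out with parallel bitwise operations per base, which is the efficiency argument the abstract makes.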
