Paper Title

Reconstruction Task Finds Universal Winning Tickets

Paper Authors

Ruichen Li, Binghui Li, Qi Qian, Liwei Wang

Paper Abstract

Pruning well-trained neural networks is effective in achieving a promising accuracy-efficiency trade-off in computer vision regimes. However, most existing pruning algorithms focus only on the classification task defined on the source domain. Unlike the original model, which exhibits strong transferability, a pruned network is hard to transfer to complicated downstream tasks such as object detection (arXiv:2012.04643). In this paper, we show that the image-level pretraining task is not capable of pruning models for diverse downstream tasks. To mitigate this problem, we introduce image reconstruction, a pixel-level task, into the traditional pruning framework. Concretely, an autoencoder is trained based on the original model, and then the pruning process is optimized with both the autoencoder and classification losses. The empirical study on benchmark downstream tasks shows that the proposed method can explicitly outperform state-of-the-art results.
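The abstract describes optimizing the pruned network with a classification loss plus a pixel-level reconstruction loss from an autoencoder head. The snippet below is a minimal, self-contained sketch of that joint objective only, not the authors' implementation: the tiny backbone/decoder architectures, the module and variable names, and the weighting factor `lam` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class Backbone(nn.Module):
    """Tiny stand-in for a (pruned) convolutional feature extractor."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)  # (B, 32, H/4, W/4)

class Decoder(nn.Module):
    """Autoencoder head: reconstructs the input image from backbone features."""
    def __init__(self):
        super().__init__()
        self.up = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1),
        )

    def forward(self, feats):
        return self.up(feats)

backbone, decoder = Backbone(), Decoder()
classifier = nn.Linear(32, 10)          # image-level classification head
opt = torch.optim.SGD(
    list(backbone.parameters()) + list(decoder.parameters()) + list(classifier.parameters()),
    lr=0.01,
)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()
lam = 1.0                               # assumed weight of the reconstruction term

images = torch.randn(8, 3, 32, 32)      # dummy batch
labels = torch.randint(0, 10, (8,))

feats = backbone(images)
logits = classifier(feats.mean(dim=(2, 3)))  # global average pooling
recon = decoder(feats)

# Joint objective: image-level classification + pixel-level reconstruction.
loss = ce(logits, labels) + lam * mse(recon, images)
opt.zero_grad()
loss.backward()
opt.step()
```

In an actual pruning pipeline this joint loss would guide which weights or channels are kept, so that the retained subnetwork preserves pixel-level information useful for dense downstream tasks rather than only class-discriminative features.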
