Paper Title

GHFP: Gradually Hard Filter Pruning

Authors

Linhang Cai, Zhulin An, Yongjun Xu

Abstract

Filter pruning is widely used to reduce the computation of deep learning, enabling the deployment of Deep Neural Networks (DNNs) on resource-limited devices. The conventional Hard Filter Pruning (HFP) method zeroizes pruned filters and stops updating them, thus reducing the search space of the model. In contrast, Soft Filter Pruning (SFP) simply zeroizes pruned filters but keeps updating them in the following training epochs, thus maintaining the capacity of the network. However, SFP and its variants converge much more slowly than HFP due to their larger search space. Our question is whether SFP-based methods and HFP can be combined to achieve better performance and faster convergence. First, we generalize SFP-based methods and HFP to analyze their characteristics. Then we propose a Gradually Hard Filter Pruning (GHFP) method that smoothly switches from SFP-based pruning to HFP during training and pruning, maintaining a large search space at first and gradually reducing the capacity of the model to ensure a moderate convergence speed. Experimental results on CIFAR-10/100 show that our method achieves state-of-the-art performance.
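
As a rough illustration of the idea described in the abstract (not the authors' code), the sketch below shows one way a soft-to-hard pruning schedule could be implemented in PyTorch: at every pruning step the lowest-L2-norm filters are zeroized, and a gradually growing fraction of them is frozen permanently by masking their weights after each update. The helper names (prune_conv_filters, hard_fraction_schedule, ghfp_step, apply_frozen_masks) and the linear hardening schedule are assumptions made for illustration only.

import torch
import torch.nn as nn


def hard_fraction_schedule(epoch: int, total_epochs: int) -> float:
    # Hypothetical schedule: linearly grow the share of permanently frozen
    # (hard-pruned) filters from 0 (pure soft pruning) to 1 (pure hard
    # pruning) over the course of training.
    return min(1.0, epoch / max(1, total_epochs - 1))


def prune_conv_filters(conv: nn.Conv2d, prune_rate: float,
                       hard_fraction: float) -> torch.Tensor:
    # Zeroize the filters with the smallest L2 norm and return the indices
    # of those that should be frozen from now on.
    with torch.no_grad():
        norms = conv.weight.view(conv.out_channels, -1).norm(p=2, dim=1)
        n_pruned = int(conv.out_channels * prune_rate)
        pruned_idx = torch.argsort(norms)[:n_pruned]
        conv.weight[pruned_idx] = 0.0          # soft prune: zeroize only
        n_hard = int(n_pruned * hard_fraction)
        return pruned_idx[:n_hard]             # these become hard-pruned


def ghfp_step(model: nn.Module, epoch: int, total_epochs: int,
              prune_rate: float, frozen_masks: dict) -> None:
    # Called once per epoch: re-rank and zeroize filters, then extend the
    # per-layer masks so hard-pruned filters can no longer recover.
    hard_frac = hard_fraction_schedule(epoch, total_epochs)
    for name, m in model.named_modules():
        if isinstance(m, nn.Conv2d):
            hard_idx = prune_conv_filters(m, prune_rate, hard_frac)
            mask = frozen_masks.setdefault(name, torch.ones_like(m.weight))
            mask[hard_idx] = 0.0


def apply_frozen_masks(model: nn.Module, frozen_masks: dict) -> None:
    # Call after every optimizer step: keeps hard-pruned filters at zero,
    # while soft-pruned filters remain free to be updated again.
    with torch.no_grad():
        for name, m in model.named_modules():
            if name in frozen_masks:
                m.weight *= frozen_masks[name]

In a full training loop one would call ghfp_step at the end of each epoch and apply_frozen_masks after every optimizer step; early in training the behavior matches soft pruning (pruned filters can recover), while near the end it matches hard pruning, which is the gradual transition the abstract describes.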
