Paper Title
QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization
Paper Authors
Paper Abstract
Recently, post-training quantization (PTQ) has attracted much attention as a way to produce efficient neural networks without long retraining. Despite its low cost, current PTQ methods tend to fail under extremely low-bit settings. In this study, we are the first to confirm that properly incorporating activation quantization into the PTQ reconstruction benefits the final accuracy. To understand the underlying reason, we establish a theoretical framework indicating that the flatness of the optimized low-bit model on both calibration and test data is crucial. Based on this conclusion, we propose a simple yet effective approach dubbed QDrop, which randomly drops the quantization of activations during PTQ. Extensive experiments on various tasks, including computer vision (image classification, object detection) and natural language processing (text classification and question answering), demonstrate its superiority. With QDrop, the limit of PTQ is pushed to 2-bit activation for the first time, and the accuracy boost can be up to 51.49%. Without bells and whistles, QDrop establishes a new state of the art for PTQ. Our code is available at https://github.com/wimh966/QDrop and has been integrated into MQBench (https://github.com/ModelTC/MQBench).
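To make the core idea concrete, below is a minimal PyTorch-style sketch of "randomly dropping the quantization of activations" as the abstract describes it. This is an illustrative sketch, not the repository's implementation: the names fake_quantize, qdrop_activation, and drop_prob are assumptions introduced here for exposition.

```python
import torch

def fake_quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Uniform fake quantization: quantize to the integer grid, then de-quantize.
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

def qdrop_activation(x, scale, zero_point, drop_prob=0.5):
    # During PTQ reconstruction, each activation element keeps its
    # full-precision value with probability drop_prob and is fake-quantized
    # otherwise; this is the "random drop" of activation quantization.
    x_q = fake_quantize(x, scale, zero_point)
    keep_fp = torch.rand_like(x) < drop_prob  # Bernoulli mask
    return torch.where(keep_fp, x, x_q)
```

Under this sketch, drop_prob would be set to 0 at inference time so that all activations are quantized as usual.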