Paper Title

Robust Quantization: One Model to Rule Them All

Authors

Moran Shkolnik, Brian Chmiel, Ron Banner, Gil Shomron, Yury Nahshan, Alex Bronstein, Uri Weiser

Abstract

Neural network quantization methods often involve simulating the quantization process during training, making the trained model highly dependent on the target bit-width and the precise way quantization is performed. Robust quantization offers an alternative approach with improved tolerance to different classes of data-types and quantization policies. It opens up exciting new applications where the quantization process is not static and can vary to meet different circumstances and implementations. To address this, we propose a method that provides the model with intrinsic robustness to a broad range of quantization processes. Our method is motivated by theoretical arguments and enables us to store a single generic model capable of operating at various bit-widths and quantization policies. We validate our method's effectiveness on different ImageNet models.
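The "simulated quantization" the abstract refers to is commonly implemented as fake quantization: tensors are rounded onto a low-bit integer grid and immediately mapped back to floating point. The NumPy sketch below is a minimal illustration of that operation applied to the same weights at several bit-widths, the setting a single robust model is meant to cover; `fake_quantize`, its symmetric scaling, and the chosen bit-widths are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fake_quantize(x, num_bits):
    """Round x onto a symmetric uniform num_bits integer grid, then
    map it back to floating point ("simulated"/fake quantization).
    Illustrative sketch only, not the paper's method."""
    qmax = 2 ** (num_bits - 1) - 1              # e.g. 127 for 8 bits
    scale = np.max(np.abs(x)) / qmax            # assumes x is not all zeros
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale                            # dequantized values

# One weight tensor evaluated under several bit-widths; a robust model
# is one whose accuracy degrades gracefully across all of them.
rng = np.random.default_rng(0)
weights = rng.standard_normal(1000).astype(np.float32)
for bits in (8, 6, 4, 2):
    mse = np.mean((weights - fake_quantize(weights, bits)) ** 2)
    print(f"{bits}-bit fake quantization, weight MSE: {mse:.6f}")
```

In this framing, robustness means the quantization error (and hence the accuracy loss) stays well behaved as the bit-width shrinks or the quantization policy changes, without retraining or storing a separate model per setting.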
