Paper Title

AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks

Authors

Louis Leconte, Sholom Schechtman, Eric Moulines

Abstract

In this paper, we develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights. First, we formulate the training of quantized neural networks (QNNs) as a smoothed sequence of interval-constrained optimization problems. Then, we propose a new first-order stochastic method, AskewSGD, to solve each constrained optimization subproblem. Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set and allows iterates that are infeasible. The numerical complexity of AskewSGD is comparable to existing approaches for training QNNs, such as the straight-through gradient estimator used in BinaryConnect, or other state-of-the-art methods (ProxQuant, LUQ). We establish convergence guarantees for AskewSGD (under general assumptions for the objective function). Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods in classical benchmarks.
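
The abstract contrasts AskewSGD with the straight-through gradient estimator (STE) used in BinaryConnect. As background only, below is a minimal PyTorch sketch of that STE baseline, not of AskewSGD itself; the class name BinarySTE, the clipped-gradient rule, and the toy loss are illustrative assumptions rather than details taken from the paper.

```python
import torch

class BinarySTE(torch.autograd.Function):
    """Sign quantization with a straight-through gradient estimator,
    i.e. a BinaryConnect-style baseline of the kind cited in the abstract."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)  # forward pass uses the sign of the latent weights

    @staticmethod
    def backward(ctx, grad_output):
        (w,) = ctx.saved_tensors
        # Straight-through: pass the gradient where the latent weight lies in
        # [-1, 1], and zero it outside that interval (the usual clipped STE).
        return grad_output * (w.abs() <= 1).to(grad_output.dtype)

# Hypothetical usage on latent full-precision weights.
w = torch.randn(4, 3, requires_grad=True)
w_q = BinarySTE.apply(w)          # quantized weights used in the forward pass
loss = (w_q - 1.0).pow(2).sum()   # stand-in for a real training loss
loss.backward()                   # gradient reaches the latent weights w via the STE
print(w.grad)
```

Per the abstract, AskewSGD replaces this kind of gradient surrogate with a smoothed sequence of interval-constrained subproblems solved by a first-order stochastic method that comes with convergence guarantees.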
