Title
EuclidNets: An Alternative Operation for Efficient Inference of Deep Learning Models
Authors
Abstract
With the advent of deep learning applications on edge devices, researchers actively try to optimize their deployment on low-power, memory-constrained devices. Established compression methods such as quantization, pruning, and architecture search leverage commodity hardware. Beyond conventional compression algorithms, one may also redesign the operations of deep learning models to obtain more efficient implementations. To this end, we propose EuclidNet, a compression method designed for hardware implementation that replaces the multiplication $xw$ with the squared Euclidean distance $(x-w)^2$. We show that EuclidNet is aligned with matrix multiplication and can serve as a measure of similarity in the case of convolutional layers. Furthermore, we show that under various transformations and noise scenarios, EuclidNet matches the performance of deep learning models built on multiplication operations.
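To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) contrasting the standard multiply-accumulate similarity of a convolution/linear layer with a EuclidNet-style score. Using the *negative* squared distance so that larger means more similar is an illustrative convention assumed here; the function names are hypothetical.

```python
def dot_similarity(x, w):
    """Standard multiply-accumulate score: sum_i x_i * w_i."""
    return sum(xi * wi for xi, wi in zip(x, w))

def euclid_similarity(x, w):
    """EuclidNet-style score: negative squared Euclidean distance.

    A score of 0 means the input patch x exactly matches the filter w,
    and more negative values mean less similar. Only subtraction and
    squaring are used instead of a general activation-weight product.
    """
    return -sum((xi - wi) ** 2 for xi, wi in zip(x, w))

x = [1.0, 2.0, -1.0]
w = [0.5, 2.0, -1.5]

print(dot_similarity(x, w))     # 6.0
print(euclid_similarity(x, w))  # -0.5
```

Note that expanding $(x-w)^2 = x^2 - 2xw + w^2$ shows why the operation is "aligned with" matrix multiplication: the cross term recovers $xw$ up to per-input and per-weight squared-norm offsets.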