Paper Title
CP-NAS: Child-Parent Neural Architecture Search for Binary Neural Networks
Paper Authors
Abstract
Neural architecture search (NAS) proves to be among the best approaches for many tasks by generating an application-adaptive neural architecture, but it is still challenged by high computational cost and memory consumption. At the same time, 1-bit convolutional neural networks (CNNs) with binarized weights and activations show their potential for resource-limited embedded devices. One natural approach is to use 1-bit CNNs to reduce the computation and memory cost of NAS by exploiting the strengths of each in a unified framework. To this end, a Child-Parent (CP) model is introduced into a differentiable NAS to search for the binarized architecture (Child) under the supervision of a full-precision model (Parent). In the search stage, the Child-Parent model uses an indicator computed from the Child and Parent model accuracies to evaluate performance and abandon operations with less potential. In the training stage, a kernel-level CP loss is introduced to optimize the binarized network. Extensive experiments demonstrate that the proposed CP-NAS achieves accuracy comparable to traditional NAS on both the CIFAR and ImageNet databases. It achieves an accuracy of $95.27\%$ on CIFAR-10 and $64.3\%$ on ImageNet with binarized weights and activations, with a $30\%$ faster search than prior art.
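The abstract only states that the search stage scores candidate operations with an indicator derived from the Child and Parent accuracies, and that training adds a kernel-level CP loss pulling the binarized Child toward the full-precision Parent. The sketch below is a minimal, hypothetical illustration of those two ideas; the indicator formula, the function names (`op_indicator`, `prune_ops`, `kernel_level_cp_loss`), and the use of a plain MSE are assumptions, not the paper's exact definitions.

```python
def op_indicator(child_acc, parent_acc):
    """Hypothetical indicator for one candidate operation.

    Assumption: reward a high Child (1-bit) accuracy while penalizing
    the accuracy gap to the full-precision Parent. The paper defines
    its own indicator; this is only an illustrative stand-in.
    """
    return child_acc - (parent_acc - child_acc)


def prune_ops(candidates, keep):
    """Keep the `keep` operations with the highest indicator.

    `candidates` maps an operation name to a (child_acc, parent_acc)
    pair measured during the search stage; the rest are abandoned.
    """
    ranked = sorted(candidates.items(),
                    key=lambda kv: op_indicator(*kv[1]),
                    reverse=True)
    return [name for name, _ in ranked[:keep]]


def kernel_level_cp_loss(child_kernel, parent_kernel):
    """Hypothetical kernel-level CP loss (training stage).

    Assumption: a mean-squared error between corresponding flattened
    kernel weights of the binarized Child and the full-precision
    Parent, applied kernel by kernel.
    """
    assert len(child_kernel) == len(parent_kernel)
    return sum((c - p) ** 2
               for c, p in zip(child_kernel, parent_kernel)) / len(child_kernel)


# Toy usage: three candidate operations with measured accuracies.
accs = {"conv3x3": (0.90, 0.95),
        "skip":    (0.70, 0.90),
        "conv5x5": (0.85, 0.95)}
survivors = prune_ops(accs, keep=2)  # → ['conv3x3', 'conv5x5']
```

The design point the indicator captures is that an operation is promising only if it both performs well in the 1-bit Child and loses little accuracy relative to the Parent, so operations that binarize poorly are discarded early.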