Paper Title
PredNAS: A Universal and Sample Efficient Neural Architecture Search Framework
Paper Authors
Paper Abstract
In this paper, we present a general and effective framework for Neural Architecture Search (NAS), named PredNAS. The motivation is that, given a differentiable performance estimation function, we can directly optimize an architecture towards higher performance by simple gradient ascent. Specifically, we adopt a neural predictor as the performance estimation function. Surprisingly, PredNAS achieves state-of-the-art performance on NAS benchmarks with only a small number of training samples (fewer than 100). To validate the universality of our method, we also apply it to large-scale tasks, comparing against RegNet on ImageNet and YOLOX on MSCOCO. The results demonstrate that PredNAS can discover novel architectures with competitive performance under specific computational complexity constraints.
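The core idea in the abstract, fitting a neural predictor on a handful of (architecture, accuracy) pairs and then performing gradient ascent on the architecture encoding through the frozen predictor, can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's implementation: it assumes a fixed-length continuous architecture encoding, a small MLP predictor, and random stand-in training data; all names and dimensions (e.g. ENC_DIM) are hypothetical.

```python
# Minimal sketch of predictor-guided search by gradient ascent.
# Assumptions (not from the paper): architectures are encoded as fixed-length
# continuous vectors in [0, 1]; the predictor is a small MLP; the training
# pairs below are random placeholders for real (architecture, accuracy) data.
import torch
import torch.nn as nn

ENC_DIM = 16  # hypothetical length of the architecture encoding

# Neural predictor: maps an architecture encoding to a predicted score.
predictor = nn.Sequential(
    nn.Linear(ENC_DIM, 64),
    nn.ReLU(),
    nn.Linear(64, 1),
)

# Toy stand-in for the "fewer than 100" training samples.
encodings = torch.rand(64, ENC_DIM)
accuracies = torch.rand(64, 1)

# Fit the predictor by ordinary regression.
opt = torch.optim.Adam(predictor.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(predictor(encodings), accuracies)
    loss.backward()
    opt.step()

# Gradient ascent on the architecture encoding itself: freeze the predictor
# and climb its predicted score starting from a random architecture.
predictor.requires_grad_(False)
arch = torch.rand(1, ENC_DIM, requires_grad=True)
arch_opt = torch.optim.SGD([arch], lr=0.05)
for _ in range(100):
    arch_opt.zero_grad()
    neg_score = -predictor(arch).mean()  # ascend by descending the negation
    neg_score.backward()
    arch_opt.step()
    with torch.no_grad():
        arch.clamp_(0.0, 1.0)  # keep the encoding in its valid range

print("predicted score of searched encoding:", predictor(arch).item())
```

In a real pipeline the continuous result would still have to be rounded or decoded back to a discrete architecture and re-evaluated; the sketch only shows the ascent step the abstract describes.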