Paper Title

Neural Architecture Search using Progressive Evolution

Paper Authors

Nilotpal Sinha, Kuan-Wen Chen

Paper Abstract

Vanilla neural architecture search using evolutionary algorithms (EA) involves evaluating each architecture by training it from scratch, which is extremely time-consuming. This cost can be reduced by using a supernet, whose weight-sharing nature allows it to estimate the fitness of every architecture in the search space. However, the estimated fitness is very noisy due to the co-adaptation of the operations in the supernet. In this work, we propose a method called pEvoNAS wherein the whole neural architecture search space is progressively reduced to smaller search space regions containing good architectures. This is achieved by using a trained supernet for architecture evaluation during the architecture search with a genetic algorithm, which identifies search space regions with good architectures. Upon reaching the final reduced search space, the supernet is then used to search for the best architecture in that space using evolution. The search is further enhanced by weight inheritance, wherein the supernet for the smaller search space inherits its weights from the previously trained supernet for the larger search space. Experimentally, pEvoNAS gives better results on CIFAR-10 and CIFAR-100 while using significantly less computational resources compared to previous EA-based methods. The code for our paper can be found at https://github.com/nightstorm0909/pEvoNAS
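To make the progressive loop described in the abstract concrete, below is a minimal, hypothetical Python sketch. It is not the pEvoNAS implementation: supernet training and evaluation are replaced by a toy surrogate fitness, and all names (surrogate_fitness, evolve, reduce_space, the operation list) are illustrative assumptions rather than the repository's API. The comment in the main loop marks where, in the actual method, a supernet would be trained on the current search space with weights inherited from the previous stage.

```python
# Minimal, hypothetical sketch of a progressive evolutionary NAS loop.
# The real pEvoNAS trains a weight-sharing supernet; here a toy surrogate
# fitness stands in for supernet-based architecture evaluation.
import random

NUM_EDGES = 6                                        # edges in a toy cell
OPS = ["skip", "conv3x3", "conv5x5", "maxpool"]      # candidate ops per edge


def surrogate_fitness(arch):
    """Stand-in for evaluating an architecture with a trained supernet."""
    # Toy objective: prefer convolution ops on earlier edges.
    score = sum((1.0 if op.startswith("conv") else 0.3) * (NUM_EDGES - i)
                for i, op in enumerate(arch))
    return score + random.gauss(0, 0.1)              # noisy, like supernet estimates


def sample_arch(space):
    """Sample one architecture from the current (possibly reduced) space."""
    return [random.choice(edge_ops) for edge_ops in space]


def evolve(space, pop_size=20, generations=10):
    """Simple GA: keep the top half, mutate one edge per parent."""
    pop = [sample_arch(space) for _ in range(pop_size)]
    for _ in range(generations):
        parents = sorted(pop, key=surrogate_fitness, reverse=True)[: pop_size // 2]
        children = []
        for parent in parents:
            child = list(parent)
            edge = random.randrange(NUM_EDGES)
            child[edge] = random.choice(space[edge])  # mutate within the space
            children.append(child)
        pop = parents + children
    return max(pop, key=surrogate_fitness)


def reduce_space(space, elite, keep=2):
    """Shrink each edge's candidate ops toward the elite architecture."""
    new_space = []
    for edge_ops, chosen in zip(space, elite):
        others = [op for op in edge_ops if op != chosen]
        new_space.append([chosen] + others[: keep - 1])
    return new_space


if __name__ == "__main__":
    space = [list(OPS) for _ in range(NUM_EDGES)]     # full search space
    for stage in range(3):                            # progressive reduction
        # In pEvoNAS, a supernet for `space` would be trained here,
        # inheriting weights from the previous stage's supernet.
        best = evolve(space)
        space = reduce_space(space, best)
        print(f"stage {stage}: best = {best}")
    print("final architecture:", evolve(space))
```

The key design idea this sketch mirrors is that evolution never evaluates architectures by training them from scratch: fitness always comes from the (shared-weight) supernet, and each stage only narrows the space so that the next supernet is trained on a smaller, more promising region.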
