Paper Title


Deep AutoAugment

Paper Authors

Yu Zheng, Zhi Zhang, Shen Yan, Mi Zhang

Paper Abstract


While recent automated data augmentation methods lead to state-of-the-art results, their design spaces and the derived data augmentation strategies still incorporate strong human priors. In this work, instead of fixing a set of hand-picked default augmentations alongside the searched data augmentations, we propose a fully automated approach for data augmentation search named Deep AutoAugment (DeepAA). DeepAA progressively builds a multi-layer data augmentation pipeline from scratch by stacking augmentation layers one at a time until reaching convergence. For each augmentation layer, the policy is optimized to maximize the cosine similarity between the gradients of the original and augmented data along a low-variance direction. Our experiments show that, even without default augmentations, we can learn an augmentation policy that achieves performance comparable to that of previous works. Extensive ablation studies show that regularized gradient matching is an effective search method for data augmentation policies. Our code is available at: https://github.com/MSU-MLSys-Lab/DeepAA.
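To make the gradient-matching objective concrete, the sketch below scores augmentations by the cosine similarity between the gradient computed on original data and the gradient computed on augmented data, and evaluates a stochastic policy as a probability-weighted average of those scores. This is a minimal NumPy illustration under stated assumptions: the function names (`grad_cosine_similarity`, `score_policy`) are hypothetical, the gradients are passed in as plain arrays, and the paper's actual variance regularization and layer-by-layer search are omitted.

```python
import numpy as np

def grad_cosine_similarity(grad_orig, grad_aug):
    """Cosine similarity between the (flattened) gradient on the original
    batch and the gradient on an augmented batch."""
    g1 = np.ravel(np.asarray(grad_orig, dtype=float))
    g2 = np.ravel(np.asarray(grad_aug, dtype=float))
    denom = np.linalg.norm(g1) * np.linalg.norm(g2) + 1e-12  # avoid /0
    return float(np.dot(g1, g2) / denom)

def score_policy(grad_orig, aug_grads, weights):
    """Score of a stochastic augmentation policy: the policy applies
    augmentation k with probability weights[k], so its score is the
    expected gradient cosine similarity. A DeepAA-style search would
    choose weights that maximize a regularized version of this score."""
    sims = np.array([grad_cosine_similarity(grad_orig, g) for g in aug_grads])
    return float(np.dot(np.asarray(weights, dtype=float), sims))
```

For example, an augmentation whose gradient points in the same direction as the original-data gradient scores 1.0, while one with an orthogonal gradient scores 0.0, so the policy weights are pushed toward augmentations that preserve the training signal.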
