Title
MixAugment & Mixup: Augmentation Methods for Facial Expression Recognition
Authors
Abstract
Automatic Facial Expression Recognition (FER) has attracted increasing attention over the last 20 years, since facial expressions play a central role in human communication. Most FER methodologies utilize Deep Neural Networks (DNNs), which are powerful tools for data analysis. However, despite their power, these networks are prone to overfitting, as they often tend to memorize the training data. Moreover, few large in-the-wild (i.e., captured in unconstrained environments) databases currently exist for FER. To alleviate this issue, a number of data augmentation techniques have been proposed. Data augmentation is a way to increase the diversity of the available data by applying constrained transformations to the original data. One such technique, which has contributed positively to various classification tasks, is Mixup, in which a DNN is trained on convex combinations of pairs of examples and their corresponding labels. In this paper, we examine the effectiveness of Mixup for in-the-wild FER, where data exhibit large variations in head pose, illumination conditions, backgrounds and contexts. We then propose a new data augmentation strategy based on Mixup, called MixAugment, in which the network is trained concurrently on a combination of virtual and real examples; all of these examples contribute to the overall loss function. We conduct an extensive experimental study that demonstrates the effectiveness of MixAugment over Mixup and various state-of-the-art methods. We further investigate the combination of dropout with Mixup and MixAugment, as well as the combination of other data augmentation techniques with MixAugment.
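The two augmentation strategies described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `mixup_pair` forms a virtual example as the convex combination of a pair of examples and their one-hot labels, and `mixaugment_loss` follows the abstract's description that the virtual example and both real examples all contribute to the overall loss (the equal weighting of the three terms and the toy inputs are assumptions for this sketch).

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup_pair(x1, y1, x2, y2, alpha=0.4):
    """Mixup: build a virtual example as a convex combination of a pair
    of examples and their one-hot labels, with lam ~ Beta(alpha, alpha)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2, lam

def soft_cross_entropy(pred_probs, soft_target):
    """Cross-entropy of predicted probabilities against a (soft) target."""
    return float(-np.sum(soft_target * np.log(pred_probs + 1e-12)))

def mixaugment_loss(p_virtual, y_virtual, p_real1, y1, p_real2, y2):
    """MixAugment (as described in the abstract): the virtual example AND
    the two real examples all contribute to the overall loss.
    Equal weighting of the three terms is an assumption of this sketch."""
    return (soft_cross_entropy(p_virtual, y_virtual)
            + soft_cross_entropy(p_real1, y1)
            + soft_cross_entropy(p_real2, y2))

# Toy demo: two fake 4-pixel "faces" with 3 expression classes.
x1, y1 = rng.random(4), np.array([1.0, 0.0, 0.0])  # e.g. "happy"
x2, y2 = rng.random(4), np.array([0.0, 1.0, 0.0])  # e.g. "sad"
xm, ym, lam = mixup_pair(x1, y1, x2, y2)

# Stand-in network output (uniform probabilities) just to exercise the loss.
p = np.full(3, 1.0 / 3.0)
loss = mixaugment_loss(p, ym, p, y1, p, y2)
print(round(loss, 4))  # 3 * ln(3) ≈ 3.2958, since each target sums to 1
```

Note that the mixed label `ym` still sums to 1 (it is a convex combination of two distributions), so standard soft-label cross-entropy applies unchanged.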