Paper Title
The UU-Net: Reversible Face De-Identification for Visual Surveillance Video Footage
Paper Authors
Abstract
We propose a reversible face de-identification method for low-resolution video data, where landmark-based techniques cannot be reliably used. Our solution is able to generate a photo-realistic de-identified stream that meets data protection regulations and can be publicly released under minimal privacy constraints. Notably, such a stream encapsulates all the information required to later reconstruct the original scene, which is useful for scenarios, such as crime investigation, where the identification of the subjects is of the utmost importance. We describe a learning process that jointly optimizes two main components: 1) a public module that receives the raw data and generates the de-identified stream, where the ID information is surrogated in a photo-realistic and seamless way; and 2) a private module, designed for legal/security authorities, that analyses the public stream and reconstructs the original scene, disclosing the actual IDs of all the subjects in the scene. The proposed solution is landmark-free and uses a conditional generative adversarial network to generate synthetic faces that preserve pose, lighting, background information and even facial expressions. Also, we enable full control over the set of soft facial attributes that should be preserved between the raw and de-identified data, which broadens the range of applications for this solution. Our experiments were conducted on three different visual surveillance datasets (BIODI, MARS and P-DESTRE) and showed highly encouraging results. The source code is available at https://github.com/hugomcp/uu-net.
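The abstract describes a pipeline of two jointly trained networks: a public module that produces a releasable de-identified stream, and a private module that inverts it to recover the original scene. The sketch below is not the authors' implementation (see the linked repository for that); it is a minimal PyTorch illustration of the chained two-module idea, assuming toy encoder-decoder networks and a plain reconstruction loss, with the adversarial, ID-surrogation and attribute-preservation terms omitted. All module and variable names are illustrative.

```python
# Minimal sketch of the two-module (public/private) idea; not the UU-Net code itself.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class TinyUNet(nn.Module):
    """Toy encoder-decoder standing in for each 'U' of the architecture."""
    def __init__(self, channels=3):
        super().__init__()
        self.enc = conv_block(channels, 32)
        self.down = nn.MaxPool2d(2)
        self.mid = conv_block(32, 64)
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec = conv_block(64 + 32, 32)
        self.out = nn.Conv2d(32, channels, 1)

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return torch.sigmoid(self.out(d))

public_net = TinyUNet()   # raw frame -> photo-realistic de-identified frame (public stream)
private_net = TinyUNet()  # de-identified frame -> reconstructed original frame (authorities only)

optimizer = torch.optim.Adam(
    list(public_net.parameters()) + list(private_net.parameters()), lr=1e-4)
l1 = nn.L1Loss()

raw = torch.rand(2, 3, 64, 64)   # stand-in for low-resolution surveillance frames
deid = public_net(raw)           # releasable de-identified stream
recon = private_net(deid)        # private reconstruction of the original scene

# Joint objective (illustrative): the private reconstruction should match the raw
# frames; the adversarial, identity-surrogation and soft-attribute terms that act
# on the public stream in the paper are omitted here.
loss = l1(recon, raw)
loss.backward()
optimizer.step()
```

The key design point conveyed by the abstract is that both networks share one training objective, so the de-identified stream is constrained to remain invertible by the private module while still replacing identity information in the public output.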