Paper Title

Rethinking Portrait Matting with Privacy Preserving

Authors

Sihan Ma, Jizhizi Li, Jing Zhang, He Zhang, Dacheng Tao

Abstract

Recently, there has been an increasing concern about the privacy issues raised by identifiable information in machine learning. However, previous portrait matting methods were all based on identifiable images. To fill the gap, we present P3M-10k, the first large-scale anonymized benchmark for Privacy-Preserving Portrait Matting (P3M). P3M-10k consists of 10,421 high-resolution face-blurred portrait images along with high-quality alpha mattes, which enables us to systematically evaluate both trimap-free and trimap-based matting methods and obtain some useful findings about model generalization ability under the privacy-preserving training (PPT) setting. We also present a unified matting model dubbed P3M-Net that is compatible with both CNN and transformer backbones. To further mitigate the cross-domain performance gap under the PPT setting, we devise a simple yet effective Copy and Paste strategy (P3M-CP), which borrows facial information from public celebrity images and directs the network to reacquire the face context at both the data and feature levels. Extensive experiments on P3M-10k and public benchmarks demonstrate the superiority of P3M-Net over state-of-the-art methods and the effectiveness of P3M-CP in improving cross-domain generalization ability, implying the great significance of P3M for future research and real-world applications.
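The abstract describes the data-level half of P3M-CP only at a high level (borrowing a face region from a public image and pasting it into a privacy-preserved training image). The sketch below illustrates what such a copy-and-paste augmentation could look like in its simplest form; the function name, box convention, and equal-size assumption are all illustrative, not the paper's actual implementation, which also operates at the feature level and handles alignment.

```python
import numpy as np

def copy_paste_face(target_img, source_img, tgt_box, src_box):
    """Hypothetical data-level copy-and-paste: overwrite the target's
    (face-blurred) face region with a face crop taken from a public
    source image. Boxes are (y0, y1, x0, x1); the two crops must match
    in size here for simplicity. Returns a new array; inputs are not
    modified."""
    ty0, ty1, tx0, tx1 = tgt_box
    sy0, sy1, sx0, sx1 = src_box
    # Sanity check: crops must have identical height and width.
    assert (ty1 - ty0, tx1 - tx0) == (sy1 - sy0, sx1 - sx0)
    out = target_img.copy()
    out[ty0:ty1, tx0:tx1] = source_img[sy0:sy1, sx0:sx1]
    return out
```

In a training pipeline, such an augmentation would typically be applied with some probability per sample, giving the network exposure to real face context without storing identifiable faces in the anonymized dataset itself.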
