Paper Title
Creating a Forensic Database of Shoeprints from Online Shoe Tread Photos
Paper Authors
Paper Abstract
Shoe tread impressions are one of the most common types of evidence left at crime scenes. However, the utility of such evidence is limited by the lack of footwear-print databases that cover the large and growing number of distinct shoe models. Moreover, such a database should ideally contain the 3D shape, or depth, of shoe treads, so that shoeprints can be extracted and matched against a query (crime-scene) print. We propose to address this gap by leveraging shoe-tread photos collected by online retailers. The core challenge is to predict depth maps for these photos. Because they lack the ground-truth 3D shape needed to train depth predictors, we exploit synthetic data that provides it. We develop a method, termed ShoeRinsics, that learns to predict depth from a mix of fully supervised synthetic data and unsupervised retail image data. In particular, we find that domain adaptation and intrinsic image decomposition techniques effectively mitigate the synthetic-to-real domain gap and yield significantly better depth predictions. To validate our method, we introduce two validation sets consisting of shoe-tread image and print pairs, and define a benchmarking protocol to quantify the quality of predicted depth. On this benchmark, ShoeRinsics outperforms existing methods for depth prediction and synthetic-to-real domain adaptation.
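For concreteness, below is a minimal PyTorch sketch of the mixed-supervision idea the abstract describes: a depth predictor trained with a supervised loss on synthetic renders (which come with ground-truth depth) and an unsupervised intrinsic-decomposition reconstruction loss (albedo × shading ≈ image) on real retail photos, plus a simple print-overlap proxy for benchmarking. All names here (`TinyPredictor`, `lambertian_shading`, `training_step`, `print_iou`), the loss weight, and the depth-thresholding rule are illustrative assumptions, not ShoeRinsics' actual architecture, renderer, or evaluation protocol; the adversarial domain-adaptation component is omitted for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def _conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class TinyPredictor(nn.Module):
    """Toy fully-convolutional net mapping an RGB shoe-tread photo to
    `cout` channels (depth if cout=1, albedo if cout=3)."""
    def __init__(self, cout):
        super().__init__()
        self.net = nn.Sequential(_conv_block(3, 32), _conv_block(32, 32),
                                 nn.Conv2d(32, cout, 3, padding=1))

    def forward(self, x):
        return self.net(x)

depth_net = TinyPredictor(1)   # image -> depth
albedo_net = TinyPredictor(3)  # image -> albedo

def lambertian_shading(depth, light=(0.0, 0.0, 1.0)):
    """Crude shading from a depth map: surface normals from depth gradients,
    dotted with a fixed directional light. A stand-in for whatever
    differentiable renderer the actual method uses."""
    dzdx = F.pad(depth[..., :, 1:] - depth[..., :, :-1], (0, 1))
    dzdy = F.pad(depth[..., 1:, :] - depth[..., :-1, :], (0, 0, 0, 1))
    normals = F.normalize(
        torch.cat([-dzdx, -dzdy, torch.ones_like(depth)], dim=1), dim=1)
    l = torch.tensor(light, device=depth.device).view(1, 3, 1, 1)
    return (normals * l).sum(dim=1, keepdim=True).clamp(min=0.0)

def training_step(syn_img, syn_depth, real_img):
    """One mixed-supervision step. Adversarial feature alignment for
    domain adaptation would be trained jointly but is omitted here."""
    # Supervised branch: synthetic renders carry ground-truth depth.
    loss_sup = F.l1_loss(depth_net(syn_img), syn_depth)

    # Unsupervised branch: decompose the real photo into albedo and
    # shading (derived from predicted depth); reconstruction error on the
    # photo itself supervises the depth prediction.
    depth = depth_net(real_img)
    albedo = torch.sigmoid(albedo_net(real_img))
    loss_recon = F.l1_loss(albedo * lambertian_shading(depth), real_img)

    return loss_sup + 0.5 * loss_recon  # 0.5 is an arbitrary weight

def print_iou(pred_depth, gt_print, contact_quantile=0.3):
    """Illustrative benchmark proxy: binarize predicted depth into a
    contact print (deepest tread regions touch the ground) and score IoU
    against a binary ground-truth print. The paper defines its own
    protocol; this is only a simple placeholder."""
    thresh = torch.quantile(pred_depth.flatten(), contact_quantile)
    pred_print = pred_depth <= thresh
    gt = gt_print > 0.5
    inter = (pred_print & gt).sum().float()
    union = (pred_print | gt).sum().float()
    return (inter / union.clamp(min=1)).item()

if __name__ == "__main__":
    # Smoke test on random tensors in place of real synthetic/retail batches.
    syn_img, syn_depth = torch.rand(2, 3, 64, 64), torch.rand(2, 1, 64, 64)
    real_img = torch.rand(2, 3, 64, 64)
    loss = training_step(syn_img, syn_depth, real_img)
    loss.backward()
    print(loss.item(), print_iou(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)))
```

In a faithful implementation, the two loss terms would be balanced by tuned weights and a domain discriminator would align synthetic and real features during training, per the domain-adaptation component the abstract highlights.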