Paper Title
Physics to the Rescue: Deep Non-line-of-sight Reconstruction for High-speed Imaging
Paper Authors
Paper Abstract
A computational approach to imaging around the corner, or non-line-of-sight (NLOS) imaging, is becoming a reality thanks to major advances in imaging hardware and reconstruction algorithms. In a recent development towards practical NLOS imaging, Nam et al. demonstrated a high-speed non-confocal imaging system that operates at 5 Hz, 100x faster than the prior art. This enormous gain in acquisition rate, however, necessitates numerous approximations in light transport, breaking many existing NLOS reconstruction methods that assume an idealized image formation model. To bridge the gap, we present a novel deep model that incorporates the complementary physics priors of wave propagation and volume rendering into a neural network for high-quality and robust NLOS reconstruction. This orchestrated design regularizes the solution space by relaxing the image formation model, resulting in a deep model that generalizes well on real captures despite being trained exclusively on synthetic data. Further, we devise a unified learning framework that enables our model to be flexibly trained with diverse supervision signals, including target intensity images or even raw NLOS transient measurements. Once trained, our model renders both intensity and depth images at inference time in a single forward pass, capable of processing more than 5 captures per second on a high-end GPU. Through extensive qualitative and quantitative experiments, we show that our method outperforms prior physics- and learning-based approaches on both synthetic and real measurements. We anticipate that our method, along with the fast capture system, will accelerate future development of NLOS imaging for real-world applications that require high-speed imaging.
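The abstract describes a pipeline that feeds a raw transient measurement through a physics-based wave-propagation stage, a learned refinement network, and a volume-rendering head that emits intensity and depth in a single forward pass. Below is a minimal, hypothetical PyTorch sketch of such a pipeline, not the authors' implementation: all class, module, and parameter names are illustrative assumptions, and the `PhysicsPrior` stage is a crude stand-in (a temporal resampling) for whatever wave-propagation operator the paper actually uses.

```python
# Hypothetical sketch (not the authors' code): a NLOS reconstruction pipeline that
# chains a fixed physics-based propagation stage, a small learned 3D CNN, and a
# differentiable volume-rendering head producing intensity and depth images.
import torch
import torch.nn as nn


class PhysicsPrior(nn.Module):
    """Placeholder for a wave-propagation operator that maps a transient
    measurement (B, 1, T, H, W) to a coarse albedo volume (B, 1, D, H, W).
    A trilinear resampling of the time axis stands in for the real solver
    so the sketch stays self-contained and runnable."""

    def __init__(self, depth_bins: int):
        super().__init__()
        self.depth_bins = depth_bins

    def forward(self, transient: torch.Tensor) -> torch.Tensor:
        # Resample T time bins onto D depth bins (crude stand-in only).
        return nn.functional.interpolate(
            transient,
            size=(self.depth_bins, *transient.shape[-2:]),
            mode="trilinear",
            align_corners=False,
        )


class NLOSNet(nn.Module):
    def __init__(self, depth_bins: int = 64, feat: int = 8):
        super().__init__()
        self.prior = PhysicsPrior(depth_bins)
        # Small 3D CNN refining the physics-derived volume.
        self.refine = nn.Sequential(
            nn.Conv3d(1, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(feat, 1, 3, padding=1), nn.Softplus(),
        )
        # Normalized depth value associated with each bin.
        self.register_buffer("z", torch.linspace(0.0, 1.0, depth_bins))

    def forward(self, transient: torch.Tensor):
        vol = self.refine(self.prior(transient))       # (B, 1, D, H, W)
        weights = torch.softmax(vol, dim=2)             # soft occupancy along depth
        intensity = (weights * vol).sum(dim=2)          # composited (B, 1, H, W)
        depth = (weights * self.z.view(1, 1, -1, 1, 1)).sum(dim=2)
        return intensity, depth


if __name__ == "__main__":
    net = NLOSNet()
    transient = torch.rand(1, 1, 128, 32, 32)  # toy (B, C, T, H, W) measurement
    intensity, depth = net(transient)
    print(intensity.shape, depth.shape)        # torch.Size([1, 1, 32, 32]) each
```

The soft compositing along the depth axis is one way a single reconstructed volume can yield both an intensity image and an expected depth map from the same forward pass, which mirrors the dual output the abstract claims; it is shown here only to illustrate the idea.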