Paper Title

Novel View Synthesis from Single Images via Point Cloud Transformation

Authors

Le, Hoang-An, Mensink, Thomas, Das, Partha, Gevers, Theo

Abstract

In this paper, the argument is made that for true novel view synthesis of objects, where the object can be synthesized from any viewpoint, an explicit 3D shape representation is desired. Our method estimates point clouds to capture the geometry of the object, which can be freely rotated into the desired view and then projected into a new image. This image, however, is sparse by nature, and hence this coarse view is used as the input of an image completion network to obtain the dense target view. The point cloud is obtained using the predicted pixel-wise depth map, estimated from a single RGB input image, combined with the camera intrinsics. By using forward warping and backward warping between the input view and the target view, the network can be trained end-to-end without supervision on depth. The benefit of using point clouds as an explicit 3D shape for novel view synthesis is experimentally validated on the 3D ShapeNet benchmark. Source code and data will be available at https://lhoangan.github.io/pc4novis/.
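The geometric core of the pipeline described above — back-projecting a pixel-wise depth map into a point cloud with the camera intrinsics, transforming it rigidly into the target view, and forward-warping it back onto the image plane — can be sketched as follows. This is a minimal illustration under standard pinhole-camera assumptions, not the authors' implementation; the function names, the toy constant-depth input, and the identity target pose are all hypothetical.

```python
import numpy as np

def depth_to_point_cloud(depth, K):
    """Back-project a pixel-wise depth map into a 3D point cloud
    using the pinhole intrinsics K (assumed known, as in the paper)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3)
    rays = pix @ np.linalg.inv(K).T          # unit-depth viewing rays
    return rays * depth.reshape(-1, 1)       # scale by depth -> (N, 3) points

def project_to_view(points, K, R, t):
    """Rigidly move the point cloud into the target camera frame and
    project it back to pixel coordinates (forward warping)."""
    cam = points @ R.T + t                   # rotate/translate into target view
    pix = cam @ K.T                          # perspective projection
    return pix[:, :2] / pix[:, 2:3], cam[:, 2]   # (u, v) and per-point depth

# Toy usage: a fronto-parallel plane at depth 2 m, identity target pose.
K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 32.0],
              [0.0,   0.0,  1.0]])
depth = np.full((64, 64), 2.0)
pts = depth_to_point_cloud(depth, K)
uv, z = project_to_view(pts, K, np.eye(3), np.zeros(3))
```

With a non-identity rotation, the reprojected pixels land at scattered, non-integer locations, which is exactly why the warped image is sparse and a completion network is needed to produce the dense target view.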
