Paper Title

Neural Subdivision

Paper Authors

Hsueh-Ti Derek Liu, Vladimir G. Kim, Siddhartha Chaudhuri, Noam Aigerman, Alec Jacobson

Paper Abstract

This paper introduces Neural Subdivision, a novel framework for data-driven coarse-to-fine geometry modeling. During inference, our method takes a coarse triangle mesh as input and recursively subdivides it to a finer geometry by applying the fixed topological updates of Loop Subdivision, but predicting vertex positions using a neural network conditioned on the local geometry of a patch. This approach enables us to learn complex non-linear subdivision schemes, beyond simple linear averaging used in classical techniques. One of our key contributions is a novel self-supervised training setup that only requires a set of high-resolution meshes for learning network weights. For any training shape, we stochastically generate diverse low-resolution discretizations of coarse counterparts, while maintaining a bijective mapping that prescribes the exact target position of every new vertex during the subdivision process. This leads to a very efficient and accurate loss function for conditional mesh generation, and enables us to train a method that generalizes across discretizations and favors preserving the manifold structure of the output. During training we optimize for the same set of network weights across all local mesh patches, thus providing an architecture that is not constrained to a specific input mesh, fixed genus, or category. Our network encodes patch geometry in a local frame in a rotation- and translation-invariant manner. Jointly, these design choices enable our method to generalize well, and we demonstrate that even when trained on a single high-resolution mesh our method generates reasonable subdivisions for novel shapes.
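
To make the coarse-to-fine update concrete, here is a minimal sketch in Python/NumPy of a single subdivision step. The 1-to-4 face split is the fixed Loop-style topological update described in the abstract, while `predict_midpoint` stands in for the learned, patch-conditioned vertex predictor. The function names, signatures, and the linear-average stub in the usage example are illustrative assumptions, not the authors' actual architecture or training code.

```python
# Minimal sketch (not the authors' implementation) of one neural-subdivision step:
# the topology follows Loop subdivision's 1-to-4 face split, but each new edge
# vertex position comes from a predictor conditioned on local geometry.
import numpy as np

def loop_topology_split(V, F, predict_midpoint):
    """V: (n,3) vertex positions, F: (m,3) triangle indices (0-based)."""
    edge_to_new = {}          # maps an undirected edge to its new vertex index
    new_vertices = list(V)    # coarse vertices are kept (the paper also refines them)

    def midpoint_index(i, j):
        key = (min(i, j), max(i, j))
        if key not in edge_to_new:
            # In the paper, a network predicts this position from a local patch
            # encoded in a rotation/translation-invariant frame; here it is a stub.
            new_vertices.append(predict_midpoint(V, F, key))
            edge_to_new[key] = len(new_vertices) - 1
        return edge_to_new[key]

    new_faces = []
    for a, b, c in F:
        ab, bc, ca = midpoint_index(a, b), midpoint_index(b, c), midpoint_index(c, a)
        # 1-to-4 split: the same topological update as classical Loop subdivision
        new_faces += [[a, ab, ca], [ab, b, bc], [ca, bc, c], [ab, bc, ca]]
    return np.array(new_vertices), np.array(new_faces)

# Usage: with a linear-average stub this reduces to plain midpoint subdivision;
# Neural Subdivision replaces the stub with a learned, non-linear predictor.
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
F = np.array([[0, 1, 2]])
V2, F2 = loop_topology_split(V, F, lambda V, F, e: 0.5 * (V[e[0]] + V[e[1]]))
print(V2.shape, F2.shape)   # (6, 3) (4, 3)
```

Because the network is shared across all local patches and operates in a local frame, the same weights can be applied recursively at every subdivision level and to meshes of arbitrary genus or category, which is what allows the method to generalize from very little training data.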
