Title
Exploring Differential Geometry in Neural Implicits
Authors
Abstract
We introduce a neural implicit framework that exploits the differentiable properties of neural networks and the discrete geometry of point-sampled surfaces to approximate them as the level sets of neural implicit functions. To train a neural implicit function, we propose a loss functional that approximates a signed distance function and allows terms with high-order derivatives, such as the alignment between the principal directions of curvature, to learn more geometric details. During training, we consider a non-uniform sampling strategy based on the curvatures of the point-sampled surface to prioritize points with more geometric details. Compared with previous approaches, this sampling yields faster learning while preserving geometric accuracy. We also use the analytical derivatives of a neural implicit function to estimate the differential measures of the underlying point-sampled surface.
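The abstract's last point, using analytical derivatives of the network to estimate differential measures, can be illustrated with a toy example. The sketch below is not the paper's architecture: it uses a hypothetical one-hidden-layer tanh network f: R^3 -> R standing in for the neural implicit function, and computes its gradient (which, for a signed distance function, gives the surface normal at level-set points) in closed form via the chain rule, checking it against finite differences.

```python
import math
import random

random.seed(0)

# Toy stand-in for a neural implicit function: one hidden tanh layer,
# f(x) = b2 + sum_i w2[i] * tanh(<W1[i], x> + b1[i]).
# Sizes and weights are illustrative, not from the paper.
H, D = 8, 3
W1 = [[random.uniform(-1, 1) for _ in range(D)] for _ in range(H)]
b1 = [random.uniform(-1, 1) for _ in range(H)]
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.1

def f(x):
    """Evaluate the implicit function at a point x in R^3."""
    return b2 + sum(
        w2[i] * math.tanh(sum(W1[i][j] * x[j] for j in range(D)) + b1[i])
        for i in range(H)
    )

def grad_f(x):
    """Analytical gradient via the chain rule: d/du tanh(u) = 1 - tanh(u)^2."""
    g = [0.0] * D
    for i in range(H):
        u = sum(W1[i][j] * x[j] for j in range(D)) + b1[i]
        s = w2[i] * (1.0 - math.tanh(u) ** 2)
        for j in range(D):
            g[j] += s * W1[i][j]
    return g

# Sanity check: the closed-form gradient matches central finite differences.
x = [0.3, -0.5, 0.2]
g = grad_f(x)
eps = 1e-6
for j in range(D):
    xp, xm = list(x), list(x)
    xp[j] += eps
    xm[j] -= eps
    fd = (f(xp) - f(xm)) / (2 * eps)
    assert abs(fd - g[j]) < 1e-6
```

In practice, frameworks such as PyTorch or JAX compute these derivatives automatically, and the same idea extends to second derivatives (the Hessian), from which curvature quantities of the level set can be extracted.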