Paper Title

Jittering Samples using a kd-Tree Stratification

Authors

Alexandros D. Keros, Divakaran Divakaran, Kartic Subr

Abstract

Monte Carlo sampling techniques are used to estimate high-dimensional integrals that model the physics of light transport in virtual scenes for computer graphics applications. These methods rely on the law of large numbers to estimate expectations via simulation, typically resulting in slow convergence. Their errors usually manifest as undesirable grain in the pictures generated by image synthesis algorithms. It is well known that these errors diminish when the samples are chosen appropriately. A well-known technique for reducing error operates by subdividing the integration domain, estimating the integral in each \emph{stratum}, and aggregating these values into a stratified sampling estimate. Naïve methods for stratification, based on a lattice (grid), are known to improve the convergence rate of Monte Carlo, but require a number of samples that grows exponentially with the dimensionality of the domain. We propose a simple stratification scheme for $d$-dimensional hypercubes using the kd-tree data structure. Our scheme enables the generation of an arbitrary number of equal-volume partitions of the rectangular domain, and $n$ samples can be generated in $O(n)$ time. Since we do not always need to explicitly build a kd-tree, we provide a simple procedure that allows the sample set to be drawn fully in parallel without any precomputation or storage, speeding up sampling to $O(\log n)$ time per sample when executed on $n$ cores. If the tree is implicitly precomputed ($O(n)$ storage), the parallelised run time reduces to $O(1)$ on $n$ cores. In addition to these benefits, we provide an upper bound on the worst-case star discrepancy for $n$ samples matching that of lattice-based sampling strategies, which occur as a special case of our proposed method. We use a number of quantitative and qualitative tests to compare our method against state-of-the-art samplers for image synthesis.
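As a concrete illustration of the scheme the abstract describes, below is a minimal Python sketch that stratifies $[0,1)^d$ into $n$ equal-volume cells with an implicit kd-tree and jitters one sample per cell. The depth-cycled split axis, the $\lfloor m/2 \rfloor$ cell split, the function names, and the per-sample seeding are illustrative assumptions, not the authors' reference implementation. `kd_stratified_samples` generates all $n$ samples recursively in $O(n)$ time, while `kd_sample` draws the $i$-th sample independently by descending the implicit tree in $O(\log n)$ steps, which is what makes fully parallel, precomputation-free sampling possible.

```python
import random


def kd_stratified_samples(n, d, rng=None):
    """Generate n jittered samples in [0, 1)^d, one per leaf of an
    implicit kd-tree that partitions the domain into n equal-volume
    axis-aligned cells. Sketch only; O(n) total work."""
    rng = rng or random.Random(0)
    samples = []

    def recurse(lo, hi, m, depth):
        # m is the number of cells (and samples) this box must produce.
        if m == 1:
            # Jitter: draw one uniform point inside the leaf cell.
            samples.append([lo[k] + rng.random() * (hi[k] - lo[k])
                            for k in range(d)])
            return
        axis = depth % d       # cycle split axes with depth (assumption)
        m_left = m // 2        # split the cell count as evenly as possible
        # Place the splitting plane proportionally to the cell counts,
        # so every leaf ends up with volume exactly 1/n.
        mid = lo[axis] + (m_left / m) * (hi[axis] - lo[axis])
        left_hi, right_lo = hi[:], lo[:]
        left_hi[axis] = mid
        right_lo[axis] = mid
        recurse(lo, left_hi, m_left, depth + 1)
        recurse(right_lo, hi, m - m_left, depth + 1)

    recurse([0.0] * d, [1.0] * d, n, 0)
    return samples


def kd_sample(i, n, d, seed=0):
    """Draw the i-th of n samples independently: descend the implicit
    kd-tree in O(log n) steps with no precomputation or storage, so
    all n samples can be drawn fully in parallel."""
    lo, hi = [0.0] * d, [1.0] * d
    m, depth = n, 0
    while m > 1:
        axis = depth % d
        m_left = m // 2
        mid = lo[axis] + (m_left / m) * (hi[axis] - lo[axis])
        if i < m_left:                  # sample i falls in the left child
            hi[axis], m = mid, m_left
        else:                           # ...or in the right child
            lo[axis], i, m = mid, i - m_left, m - m_left
        depth += 1
    rng = random.Random(seed * n + i)   # illustrative per-sample seeding
    return [lo[k] + rng.random() * (hi[k] - lo[k]) for k in range(d)]


if __name__ == "__main__":
    # Seven stratified, jittered samples in the unit square.
    for p in kd_stratified_samples(7, 2):
        print(p)
```

Note that in this sketch $n$ need not be a power of two or a $d$-th power: the proportional split keeps every cell at volume exactly $1/n$ for any $n$, which is the property that distinguishes this stratification from a lattice (grid), where the sample count must grow exponentially with $d$.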
