Paper title
Deep vanishing point detection: Geometric priors make dataset variations vanish
Paper authors
Paper abstract
Deep learning has improved vanishing point detection in images. Yet, deep networks require expensive annotated datasets, are trained on costly hardware, and do not generalize to even slightly different domains or minor problem variants. Here, we address these issues by injecting prior knowledge into deep vanishing point detection networks. This prior knowledge no longer needs to be learned from data, saving valuable annotation effort and compute, unlocking realistic few-sample scenarios, and reducing the impact of domain changes. Moreover, the interpretability of the priors allows adapting deep networks to minor problem variations such as switching between Manhattan and non-Manhattan worlds. We seamlessly incorporate two geometric priors: (i) the Hough Transform -- mapping image pixels to straight lines, and (ii) the Gaussian sphere -- mapping lines to great circles whose intersections denote vanishing points. Experimentally, we ablate our choices and show accuracy comparable to existing models in the large-data setting. We validate our model's improved data efficiency, robustness to domain changes, and adaptability to non-Manhattan settings.
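The Gaussian-sphere prior mentioned in the abstract can be illustrated with classical geometry: an image line corresponds to a great circle on the unit sphere around the camera center, and two great circles intersect along the vanishing direction shared by the lines. The sketch below is not the paper's implementation, only a minimal numerical illustration of that mapping, assuming a pinhole camera with the image plane at `z = f` and lines given in the form `ax + by + c = 0` (all function names here are hypothetical):

```python
import numpy as np

def line_to_great_circle_normal(a, b, c, f=1.0):
    """Map the image line ax + by + c = 0 (on the plane z = f) to the unit
    normal of the plane through the camera center containing that line.
    This plane cuts the Gaussian sphere in the line's great circle."""
    n = np.array([a, b, c / f])
    return n / np.linalg.norm(n)

def vanishing_direction(n1, n2):
    """Two great circles intersect where a direction is orthogonal to both
    plane normals: the (normalized) cross product of the normals."""
    d = np.cross(n1, n2)
    return d / np.linalg.norm(d)

# Two image lines meeting at the image point (2, 1), with f = 1:
n1 = line_to_great_circle_normal(1.0, 0.0, -2.0)  # vertical line x = 2
n2 = line_to_great_circle_normal(0.0, 1.0, -1.0)  # horizontal line y = 1
d = vanishing_direction(n1, n2)

# Back-projecting the direction onto the image plane z = f recovers the
# vanishing point in pixel coordinates.
vp = d[:2] / d[2]  # -> approximately (2.0, 1.0)
```

In this toy case the two lines intersect inside the image, so the recovered vanishing point is simply their intersection; the sphere representation becomes essential when lines are nearly parallel and their vanishing point lies far outside (or at infinity in) the image plane.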