Paper Title
Neural Set Function Extensions: Learning with Discrete Functions in High Dimensions
Paper Authors
Abstract
Integrating functions on discrete domains into neural networks is key to developing their capability to reason about discrete objects. But discrete domains are (1) not naturally amenable to gradient-based optimization, and (2) incompatible with deep learning architectures that rely on representations in high-dimensional vector spaces. In this work, we address both difficulties for set functions, which capture many important discrete problems. First, we develop a framework for extending set functions onto low-dimensional continuous domains, where many extensions are naturally defined. Our framework subsumes many well-known extensions as special cases. Second, to avoid undesirable low-dimensional neural network bottlenecks, we convert low-dimensional extensions into representations in high-dimensional spaces, taking inspiration from the success of semidefinite programs for combinatorial optimization. Empirically, we observe benefits of our extensions for unsupervised neural combinatorial optimization, in particular with high-dimensional representations.
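A classic instance of the kind of extension the abstract describes is the Lovász extension, which extends a set function f (normalized so that f(∅) = 0) from the vertices {0,1}^n of the cube to the full continuous domain [0,1]^n by sorting the input's coordinates and telescoping f over the resulting chain of threshold sets. The sketch below is a minimal illustration of that well-known construction, not code from the paper; the name `lovasz_extension` and the toy graph-cut set function are my own illustrative choices.

```python
import numpy as np

def lovasz_extension(f, x):
    """Evaluate the Lovász extension of a set function f at x in [0, 1]^n.

    f: callable on a Python set of indices, with f(set()) == 0
       (the standard normalization).
    x: 1-D numpy array of fractional coordinates.
    """
    order = np.argsort(-x)            # indices sorted by decreasing x-value
    value, prev = 0.0, 0.0            # prev holds f(S_{i-1}), starting at f(∅) = 0
    S = set()
    for i in order:
        S.add(int(i))                 # grow the chain of threshold sets S_1 ⊂ ... ⊂ S_n
        cur = f(S)
        value += x[i] * (cur - prev)  # x_{σ(i)} · (f(S_i) − f(S_{i−1}))
        prev = cur
    return value

# Toy example: the cut function of a triangle graph.
edges = [(0, 1), (1, 2), (0, 2)]
cut = lambda S: sum((u in S) != (v in S) for u, v in edges)

print(lovasz_extension(cut, np.array([1.0, 0.0, 1.0])))  # 2.0, equals cut({0, 2})
print(lovasz_extension(cut, np.array([0.8, 0.3, 0.6])))  # fractional relaxed value
```

The defining property, visible in the first call, is that the extension agrees with f exactly on binary inputs; it is this agreement that makes gradient-based optimization of the continuous relaxation a meaningful proxy for the underlying discrete problem.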