Paper Title
$Π-$nets: Deep Polynomial Neural Networks
Paper Authors
Paper Abstract
Deep Convolutional Neural Networks (DCNNs) are currently the method of choice both for generative and for discriminative learning in computer vision and machine learning. The success of DCNNs can be attributed to the careful selection of their building blocks (e.g., residual blocks, rectifiers, sophisticated normalization schemes, to mention but a few). In this paper, we propose $Π$-Nets, a new class of DCNNs. $Π$-Nets are polynomial neural networks, i.e., the output is a high-order polynomial of the input. $Π$-Nets can be implemented using a special kind of skip connection, and their parameters can be represented via high-order tensors. We empirically demonstrate that $Π$-Nets have better representation power than standard DCNNs, and that they produce good results even without non-linear activation functions across a large battery of tasks and signals, i.e., images, graphs, and audio. When used in conjunction with activation functions, $Π$-Nets produce state-of-the-art results in challenging tasks, such as image generation. Lastly, our framework elucidates why recent generative models, such as StyleGAN, improve upon their predecessors, e.g., ProGAN.
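The abstract states that $Π$-Nets raise the polynomial degree of the output via a special kind of skip connection, without requiring non-linear activations. The sketch below is a minimal, hypothetical illustration of that idea: each step multiplies a linear transform of the input elementwise with the running representation and adds a skip connection, so every step increases the polynomial degree of the output in the input by one. All sizes, weight names, and the exact recursion are illustrative assumptions, not the paper's verbatim architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def polynomial_block(x, U, h):
    """One multiplicative skip-connection step (illustrative).

    x: the original network input
    U: a learnable linear transform (here random, for illustration)
    h: the running representation (a polynomial of x of degree n)

    Returns a representation of degree n + 1: the Hadamard product
    (U @ x) * h injects the higher-order term, and + h is the skip.
    """
    return (U @ x) * h + h

# Hypothetical dimensions: input size d, hidden size k.
d, k = 4, 4
x = rng.standard_normal(d)
U1, U2, U3 = (rng.standard_normal((k, d)) for _ in range(3))

h = U1 @ x                      # degree-1 polynomial of x
h = polynomial_block(x, U2, h)  # degree <= 2
h = polynomial_block(x, U3, h)  # degree <= 3
```

Note that no non-linear activation appears anywhere: the expressive power comes purely from the multiplicative interactions, which is the property the abstract highlights.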