Paper Title

Differentiable Implicit Layers

Paper Authors

Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters

Paper Abstract

In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them perfectly suitable as a learnable layer in a neural network. We demonstrate our scheme on different applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
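To make the idea concrete, here is a minimal sketch (not the paper's implementation) of backpropagating through one implicit Euler step via the implicit function theorem. It assumes a linear vector field f(z) = A @ z so the step has a closed-form solve; the names A, z0, h, and the helper functions are illustrative, not taken from the paper. The gradient is obtained from the solved fixed point directly, without differentiating through any solver iterations.

```python
import numpy as np

def implicit_euler_step(A, z0, h):
    """One implicit Euler step for z' = A @ z:
    solve z1 = z0 + h * A @ z1, i.e. (I - h*A) z1 = z0."""
    n = z0.shape[0]
    return np.linalg.solve(np.eye(n) - h * A, z0)

def grad_loss_wrt_A(A, z0, h, dL_dz1):
    """Gradient of a loss L(z1) w.r.t. A via the implicit function theorem.

    The step is the root of F(z1, A) = z1 - z0 - h*A@z1 = 0, with
    dF/dz1 = I - h*A. The adjoint lam solves (I - h*A)^T lam = dL/dz1,
    and the chain rule then gives dL/dA = h * outer(lam, z1).
    """
    n = z0.shape[0]
    z1 = implicit_euler_step(A, z0, h)
    lam = np.linalg.solve((np.eye(n) - h * A).T, dL_dz1)
    return h * np.outer(lam, z1)
```

The same pattern extends to nonlinear f by replacing the linear solve with a root finder for the forward pass, while the backward pass still needs only one linear solve against the Jacobian of the residual at the solution.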
