Paper Title

Verifying Recurrent Neural Networks using Invariant Inference

Paper Authors

Yuval Jacoby, Clark Barrett, Guy Katz

Abstract

Deep neural networks are revolutionizing the way complex systems are developed. However, these automatically-generated networks are opaque to humans, making it difficult to reason about them and guarantee their correctness. Here, we propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks. Recurrent neural networks play a key role in, e.g., natural language processing, and their verification is crucial for guaranteeing the reliability of many critical systems. Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems. Experiments with a proof-of-concept implementation of our approach demonstrate that it performs orders-of-magnitude better than the state of the art.
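The core idea in the abstract — inferring an invariant on the recurrent hidden state so that verification reduces to a single non-recurrent query instead of unrolling the network over time — can be illustrated with a toy sketch. This is not the paper's implementation (which dispatches such queries to an off-the-shelf feedforward-network verifier); it uses a hypothetical scalar ReLU cell and plain interval arithmetic, and all names and weights below are made up for illustration:

```python
# Toy sketch of invariant-based RNN verification (illustrative only).
# Cell: h' = relu(wx * x + wh * h), scalar state for simplicity.
# Property: the hidden state stays in [0, c] for ALL time steps,
# given inputs bounded in [x_lo, x_hi].

def relu(v):
    return max(v, 0.0)

def interval_affine(w, lo, hi):
    """Interval bound for w * [lo, hi] with a scalar weight w."""
    a, b = w * lo, w * hi
    return (min(a, b), max(a, b))

def check_invariant(wx, wh, x_lo, x_hi, c):
    """Check that h in [0, c] is an *inductive* invariant of the cell.
    This is the reduction described in the abstract: instead of
    reasoning about the unrolled recurrent network, we verify one
    non-recurrent query about a single application of the cell."""
    # Base case: the initial state h0 = 0 lies in [0, c].
    base_ok = 0.0 <= c
    # Inductive step: assume h in [0, c] and x in [x_lo, x_hi],
    # then bound the next state with interval arithmetic.
    x_part = interval_affine(wx, x_lo, x_hi)
    h_part = interval_affine(wh, 0.0, c)
    pre_lo, pre_hi = x_part[0] + h_part[0], x_part[1] + h_part[1]
    nxt_lo, nxt_hi = relu(pre_lo), relu(pre_hi)
    step_ok = (0.0 <= nxt_lo) and (nxt_hi <= c)
    return base_ok and step_ok

# Invariant h in [0, 1] holds: 0.5*1 + 0.4*1 = 0.9 <= 1.
print(check_invariant(0.5, 0.4, 0.0, 1.0, 1.0))  # True
# Invariant fails to be inductive: 0.5*1 + 0.9*1 = 1.4 > 1.
print(check_invariant(0.5, 0.9, 0.0, 1.0, 1.0))  # False
```

Because the inductive check holds for every time step at once, a successful invariant proves the property for unbounded sequence lengths — the source of the speedups the abstract reports over unrolling-based approaches.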
