Title
Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift
Authors
Abstract
人工智能(AI)的最新进步已为行为分析(UEBA)带来了用于网络安全性的新能力,包括基于在信息系统上观察到的事件的异常事件的敌对行动,在我们先前的工作中。 UEBA系统。这导致了假阳性和假负率降低,提高了警报解释性,同时保持实时性能和可伸缩性。但是,我们没有通过随着时间的流逝来解决行为的自然演变,也称为概念漂移。为了维持有效的检测能力,必须对基于异常的检测系统进行持续训练,这为对手打开了一扇门,可以通过逐步提炼行为模型内部的未注明攻击痕迹,直到完全攻击被认为是正常的,可以通过逐步提炼出未注明的攻击痕迹来进行所谓的“幼蛙”攻击。在本文中,我们提出了一种解决方案,可以通过改善检测过程并有效利用人类专业知识来有效地减轻这种攻击。我们还介绍了进行对抗性AI进行欺骗攻击的初步工作,从术语中,该攻击将用于帮助评估和改善防御系统。这些防御性和进攻性的AI实施联合,持续和积极的学习,这是评估,验证和认证基于AI的防御解决方案所必需的一步。
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security, namely the detection of hostile actions based on the unusual nature of events observed on the Information System. In our previous work (presented at C\&ESAR 2018 and FIC 2019), we combined deep neural network auto-encoders for anomaly detection with graph-based event correlation to address major limitations of UEBA systems. This resulted in reduced false positive and false negative rates and improved alert explainability, while maintaining real-time performance and scalability. However, we did not address the natural evolution of behaviours through time, also known as concept drift. To maintain effective detection capabilities, an anomaly-based detection system must be continually trained, which opens a door to an adversary who can conduct the so-called "frog-boiling" attack by progressively distilling unnoticed attack traces into the behavioural models until the complete attack is considered normal. In this paper, we present a solution that effectively mitigates this attack by improving the detection process and efficiently leveraging human expertise. We also present preliminary work on an adversarial AI conducting deception attacks, which will, in turn, be used to help assess and improve the defense system. These defensive and offensive AIs implement joint, continual and active learning, a necessary step towards assessing, validating and certifying AI-based defensive solutions.
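To make the auto-encoder anomaly-detection idea mentioned in the abstract concrete, the sketch below shows the general pattern of scoring events by reconstruction error against a threshold calibrated on normal behaviour. It is a minimal illustration only, not the authors' implementation: the framework (PyTorch), network sizes, feature vectors and the 99th-percentile threshold are all assumptions for demonstration, and real UEBA feature extraction from event logs is assumed to happen upstream.

```python
# Illustrative sketch (hypothetical, not the paper's system): auto-encoder
# anomaly detection by reconstruction error with a percentile threshold.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train(model, normal_events, epochs=50, lr=1e-3):
    # Fit the auto-encoder to reconstruct benign behaviour vectors only.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_events), normal_events)
        loss.backward()
        opt.step()
    return model

def anomaly_scores(model, events):
    # Per-event reconstruction error: large error = behaviour unlike training data.
    with torch.no_grad():
        return ((model(events) - events) ** 2).mean(dim=1)

if __name__ == "__main__":
    torch.manual_seed(0)
    n_features = 16
    normal = torch.randn(512, n_features)            # stand-in for benign behaviour features
    model = train(AutoEncoder(n_features), normal)
    threshold = anomaly_scores(model, normal).quantile(0.99)  # calibrated on normal data
    suspicious = normal[:4] + 3.0                    # crude stand-in for deviant events
    print(anomaly_scores(model, suspicious) > threshold)
```

In such a scheme, the threshold and model must be periodically re-estimated as behaviours drift, which is exactly the retraining loop that the "frog-boiling" attack described in the abstract seeks to exploit.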