Paper Title
A Learning Based Hypothesis Test for Harmful Covariate Shift
Paper Authors
Paper Abstract
The ability to quickly and accurately identify covariate shift at test time is a critical and often overlooked component of safe machine learning systems deployed in high-risk domains. While methods exist for detecting when predictions should not be made on out-of-distribution test examples, identifying distributional level differences between training and test time can help determine when a model should be removed from the deployment setting and retrained. In this work, we define harmful covariate shift (HCS) as a change in distribution that may weaken the generalization of a predictive model. To detect HCS, we use the discordance between an ensemble of classifiers trained to agree on training data and disagree on test data. We derive a loss function for training this ensemble and show that the disagreement rate and entropy represent powerful discriminative statistics for HCS. Empirically, we demonstrate the ability of our method to detect harmful covariate shift with statistical certainty on a variety of high-dimensional datasets. Across numerous domains and modalities, we show state-of-the-art performance compared to existing methods, particularly when the number of observed test samples is small.
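The two discriminative statistics the abstract names can be made concrete with a small sketch. This is an illustrative example, not the paper's implementation: given hard label predictions from an ensemble on a batch of test samples, it computes the disagreement rate (fraction of samples on which the ensemble members do not all agree) and the mean entropy of the ensemble's empirical per-sample label distribution. All function names, array shapes, and the toy data are assumptions made for illustration.

```python
import numpy as np


def disagreement_rate(preds: np.ndarray) -> float:
    """Fraction of samples where at least one ensemble member disagrees.

    preds: integer class predictions of shape (n_models, n_samples).
    """
    # A column is unanimous iff its min equals its max.
    return float(np.mean(preds.min(axis=0) != preds.max(axis=0)))


def mean_entropy(preds: np.ndarray, n_classes: int) -> float:
    """Average entropy (in nats) of the ensemble's empirical label
    distribution for each test sample."""
    n_models, n_samples = preds.shape
    total = 0.0
    for j in range(n_samples):
        counts = np.bincount(preds[:, j], minlength=n_classes)
        p = counts / n_models
        p = p[p > 0]  # avoid log(0); zero-probability classes contribute 0
        total += float(-(p * np.log(p)).sum())
    return total / n_samples


# Toy batch: 3 ensemble members, 4 test samples, 2 classes.
preds = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 1, 0]])
print(disagreement_rate(preds))          # 0.5 (disagreement on samples 2 and 3)
print(round(mean_entropy(preds, 2), 3))  # 0.318
```

Under the paper's framing, larger values of either statistic on a candidate test batch (relative to their distribution on held-in data) are evidence of harmful covariate shift; turning that comparison into a calibrated hypothesis test requires a null distribution, e.g. from repeated draws of in-distribution batches.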