Paper Title


TeST: Test-time Self-Training under Distribution Shift

Authors

Samarth Sinha, Peter Gehler, Francesco Locatello, Bernt Schiele

Abstract


Despite their recent success, deep neural networks continue to perform poorly when they encounter distribution shifts at test time. Many recently proposed approaches try to counter this by aligning the model to the new distribution prior to inference. With no labels available, this requires unsupervised objectives to adapt the model on the observed test data. In this paper, we propose Test-Time Self-Training (TeST): a technique that takes as input a model trained on some source data and a novel data distribution at test time, and learns invariant and robust representations using a student-teacher framework. We find that models adapted using TeST improve significantly over baseline test-time adaptation algorithms. TeST achieves performance competitive with modern domain adaptation algorithms while having access to 5-10x less data at the time of adaptation. We thoroughly evaluate a variety of baselines on two tasks, object detection and image segmentation, and find that TeST sets a new state of the art for test-time domain adaptation algorithms.
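The abstract describes a student-teacher framework that adapts a source-trained model to unlabeled test-time data. A minimal sketch of that general idea (not the authors' actual method; the toy 1-D logistic model, hard pseudo-labels, and EMA schedule here are illustrative assumptions): an EMA "teacher" copy of the model supplies pseudo-labels on unlabeled test samples, and the "student" takes gradient steps on them while the teacher slowly tracks the student.

```python
import math
import random

# Hypothetical illustration of test-time self-training with a
# student-teacher loop; the model is a 1-D logistic regression
# sigmoid(w*x + b), not the architecture used in the paper.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def adapt(student, test_xs, steps=100, lr=0.1, ema=0.99):
    teacher = dict(student)          # EMA copy of the source-trained model
    for _ in range(steps):
        x = random.choice(test_xs)   # unlabeled test-time sample
        # Teacher produces a hard pseudo-label for the sample.
        pseudo = 1.0 if sigmoid(teacher["w"] * x + teacher["b"]) > 0.5 else 0.0
        # Student takes a gradient step on the pseudo-labeled sample.
        p = sigmoid(student["w"] * x + student["b"])
        grad = p - pseudo            # d(cross-entropy)/d(logit)
        student["w"] -= lr * grad * x
        student["b"] -= lr * grad
        # Teacher tracks the student via exponential moving average.
        for k in teacher:
            teacher[k] = ema * teacher[k] + (1 - ema) * student[k]
    return student

random.seed(0)
# Source model separates the two classes at x = 0; the test
# distribution is shifted by +2 relative to the source.
student = {"w": 1.0, "b": 0.0}
shifted = [random.gauss(2.0, 0.5) * (1 if i % 2 else -1) + 2.0
           for i in range(200)]
student = adapt(student, shifted)
print(student)
```

In practice the teacher's pseudo-labels are often filtered by confidence, and the student may additionally see augmented views of each test sample so the consistency objective encourages invariant representations.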
