Title
Representation via Representations: Domain Generalization via Adversarially Learned Invariant Representations
Authors
Abstract
We investigate the power of censoring techniques, first developed for learning {\em fair representations}, to address domain generalization. We examine {\em adversarial} censoring techniques for learning invariant representations from multiple "studies" (or domains), where each study is drawn according to a distribution on domains. The mapping is used at test time to classify instances from a new domain. In many contexts, such as medical forecasting, domain generalization from studies in populous areas (where data are plentiful), to geographically remote populations (for which no training data exist) provides fairness of a different flavor, not anticipated in previous work on algorithmic fairness. We study an adversarial loss function for $k$ domains and precisely characterize its limiting behavior as $k$ grows, formalizing and proving the intuition, backed by experiments, that observing data from a larger number of domains helps. The limiting results are accompanied by non-asymptotic learning-theoretic bounds. Furthermore, we obtain sufficient conditions for good worst-case prediction performance of our algorithm on previously unseen domains. Finally, we decompose our mappings into two components and provide a complete characterization of invariance in terms of this decomposition. To our knowledge, our results provide the first formal guarantees of these kinds for adversarial invariant domain generalization.
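The abstract describes an adversarial censoring objective: an encoder is trained to support the prediction task while a domain adversary tries to identify which of the $k$ domains each representation came from. The minimal numpy sketch below illustrates that objective; the linear encoder/adversary heads, the `lam` trade-off weight, and all function names are illustrative assumptions, not the paper's actual architecture. Note that when the representation is perfectly domain-invariant, the best the adversary can do on $k$ balanced domains is chance level, i.e. cross-entropy $\log k$ — consistent with the limiting behavior the abstract studies.

```python
import numpy as np


def softmax(z):
    """Row-wise softmax with the usual max-subtraction for stability."""
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)


def adversarial_objective(phi_x, y, d, W_task, W_adv, lam):
    """Hypothetical censoring objective for one batch.

    phi_x  : (n, p) encoded representations phi(x)
    y      : (n,)   task labels
    d      : (n,)   domain indices in {0, ..., k-1}
    W_task : (p, c) linear task head (illustrative)
    W_adv  : (p, k) linear domain adversary head (illustrative)
    lam    : trade-off between task accuracy and domain invariance

    The encoder would minimize this value (good task loss, bad adversary
    loss), while the adversary separately minimizes its own cross-entropy.
    """
    n = len(y)
    p_task = softmax(phi_x @ W_task)
    p_adv = softmax(phi_x @ W_adv)
    task_ce = -np.log(p_task[np.arange(n), y]).mean()
    adv_ce = -np.log(p_adv[np.arange(n), d]).mean()
    return task_ce - lam * adv_ce
```

With an uninformative adversary head (e.g. `W_adv` all zeros), its predictions are uniform over the $k$ domains and its cross-entropy equals $\log k$, the chance-level value that a fully invariant representation forces in the limit.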