Paper Title

Covariate Distribution Aware Meta-learning

Paper Authors

Amrith Setlur, Saket Dingliwal, Barnabas Poczos

Paper Abstract

Meta-learning has proven to be successful for few-shot learning across the regression, classification, and reinforcement learning paradigms. Recent approaches have adopted Bayesian interpretations to improve gradient-based meta-learners by quantifying the uncertainty of the post-adaptation estimates. Most of these works almost completely ignore the latent relationship between the covariate distribution $p(x)$ of a task and the corresponding conditional distribution $p(y|x)$. In this paper, we identify the need to explicitly model the meta-distribution over the task covariates in a hierarchical Bayesian framework. We begin by introducing a graphical model that leverages the samples from the marginal $p(x)$ to better infer the posterior over the optimal parameters of the conditional distribution $p(y|x)$ for each task. Based on this model, we propose a computationally feasible meta-learning algorithm by introducing meaningful relaxations in our final objective. We demonstrate the gains of our algorithm over initialization-based meta-learning baselines on popular classification benchmarks. Finally, to understand the potential benefit of modeling task covariates, we further evaluate our method on a synthetic regression dataset.
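To make the core idea concrete, below is a minimal illustrative sketch, not the paper's actual graphical model or algorithm: a conjugate Bayesian linear regression in which the prior mean over the weights of $p(y|x)$ is computed from statistics of the task's unlabeled covariates, so that samples from the marginal $p(x)$ shift the posterior. The function names (`covariate_aware_posterior`, `prior_mean_fn`), the toy task family, and all hyperparameters are assumptions made for the example only.

```python
import numpy as np

def covariate_aware_posterior(X_support, y_support, X_unlabeled,
                              prior_mean_fn, noise_var=0.25, prior_var=1.0):
    """Conjugate Gaussian posterior over linear weights w for one task.

    The prior mean m0 is predicted from the task's unlabeled covariates,
    so samples from p(x) inform the posterior over the parameters of p(y|x).
    """
    d = X_support.shape[1]
    m0 = prior_mean_fn(X_unlabeled)  # covariate-dependent prior mean (assumed link)
    precision = X_support.T @ X_support / noise_var + np.eye(d) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ (X_support.T @ y_support / noise_var + m0 / prior_var)
    return mean, cov

# Toy task family echoing the synthetic-regression setting in spirit:
# the true slope of each task is tied to the mean of its covariates,
# so observing unlabeled x's alone is informative about p(y|x).
rng = np.random.default_rng(0)

def sample_task(n_support=5, n_unlabeled=200):
    mu = rng.uniform(-2.0, 2.0)       # latent task variable
    w_true = np.array([mu])           # slope depends on the mean of p(x)
    X_s = rng.normal(mu, 1.0, size=(n_support, 1))
    y_s = X_s @ w_true + rng.normal(0.0, 0.5, size=n_support)
    X_u = rng.normal(mu, 1.0, size=(n_unlabeled, 1))
    return X_s, y_s, X_u, w_true

# Hypothetical, hand-coded link from covariate statistics to the prior mean;
# in the paper this relationship would be learned across tasks, not fixed.
prior_mean_fn = lambda X_u: np.array([X_u.mean()])

X_s, y_s, X_u, w_true = sample_task()
mean, _ = covariate_aware_posterior(X_s, y_s, X_u, prior_mean_fn)
baseline, _ = covariate_aware_posterior(X_s, y_s, X_u, lambda X: np.zeros(1))
print(f"true w={w_true[0]:.2f}  covariate-aware={mean[0]:.2f}  "
      f"zero-mean prior={baseline[0]:.2f}")
```

In this sketch, with only a few labeled points the covariate-aware prior pulls the posterior mean toward the true slope, while a covariate-agnostic zero-mean prior does not; the paper's hierarchical Bayesian formulation meta-learns this kind of link between $p(x)$ and $p(y|x)$ across tasks rather than hard-coding it.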
