Paper Title

Stochastic Multiple Target Sampling Gradient Descent

Paper Authors

Hoang Phan, Ngoc Tran, Trung Le, Toan Tran, Nhat Ho, Dinh Phung

Paper Abstract

Sampling from an unnormalized target distribution is an essential problem with many applications in probabilistic inference. Stein Variational Gradient Descent (SVGD) has been shown to be a powerful method that iteratively updates a set of particles to approximate the distribution of interest. Furthermore, when analysing its asymptotic properties, SVGD reduces exactly to a single-objective optimization problem and can be viewed as a probabilistic version of this single-objective optimization problem. A natural question then arises: "Can we derive a probabilistic version of the multi-objective optimization?". To answer this question, we propose Stochastic Multiple Target Sampling Gradient Descent (MT-SGD), enabling us to sample from multiple unnormalized target distributions. Specifically, our MT-SGD conducts a flow of intermediate distributions gradually orienting to multiple target distributions, which allows the sampled particles to move to the joint high-likelihood region of the target distributions. Interestingly, the asymptotic analysis shows that our approach reduces exactly to the multiple-gradient descent algorithm for multi-objective optimization, as expected. Finally, we conduct comprehensive experiments to demonstrate the merit of our approach to multi-task learning.
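
The abstract describes MT-SGD as an SVGD-style particle update driven by several unnormalized targets at once. Below is a minimal, illustrative sketch of that idea, not the authors' implementation: each target contributes its own SVGD direction through a shared RBF kernel, and the per-target directions are combined before the particles are moved. The uniform averaging used here is only a stand-in for the multiple-gradient-descent-style weighting the paper actually derives; the kernel bandwidth, step size, and toy Gaussian targets are likewise illustrative assumptions.

    import numpy as np

    def rbf_kernel(x, h=1.0):
        # Pairwise RBF kernel values and their gradients w.r.t. the first argument.
        diff = x[:, None, :] - x[None, :, :]        # (n, n, d), diff[j, i] = x_j - x_i
        sq = np.sum(diff ** 2, axis=-1)             # (n, n)
        k = np.exp(-sq / (2.0 * h ** 2))            # (n, n), k[j, i] = k(x_j, x_i)
        grad_k = -diff / (h ** 2) * k[:, :, None]   # (n, n, d), grad wrt x_j of k(x_j, x_i)
        return k, grad_k

    def multi_target_step(x, score_fns, step=1e-1):
        # One particle update driven by several target score functions.
        n = x.shape[0]
        k, grad_k = rbf_kernel(x)
        directions = []
        for score in score_fns:                     # one SVGD direction per target
            s = score(x)                            # (n, d) = grad log p_t at each particle
            phi = (k @ s + grad_k.sum(axis=0)) / n  # standard SVGD update direction
            directions.append(phi)
        # Placeholder combination: uniform average of the per-target directions
        # (the paper instead derives weights in the spirit of multiple-gradient descent).
        combined = np.mean(directions, axis=0)
        return x + step * combined

    # Toy usage: two Gaussian targets with different means; the particles drift
    # toward the region where both densities are relatively high.
    scores = [lambda x: -(x - 1.0),   # score of N(mean = +1, identity covariance)
              lambda x: -(x + 1.0)]   # score of N(mean = -1, identity covariance)
    particles = np.random.randn(50, 2)
    for _ in range(200):
        particles = multi_target_step(particles, scores)
    print(particles.mean(axis=0))     # near the overlap region around the origin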
