Paper Title
Statistical Learning with Conditional Value at Risk
Paper Authors
Paper Abstract
We propose a risk-averse statistical learning framework wherein the performance of a learning algorithm is evaluated by the conditional value-at-risk (CVaR) of losses rather than the expected loss. We devise algorithms based on stochastic gradient descent for this framework. While existing studies of CVaR optimization require direct access to the underlying distribution, our algorithms make a weaker assumption that only i.i.d.\ samples are given. For convex and Lipschitz loss functions, we show that our algorithm has $O(1/\sqrt{n})$-convergence to the optimal CVaR, where $n$ is the number of samples. For nonconvex and smooth loss functions, we show a generalization bound on CVaR. By conducting numerical experiments on various machine learning tasks, we demonstrate that our algorithms effectively minimize CVaR compared with other baseline algorithms.
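Since the abstract does not spell out the algorithm, here is a minimal sketch of one standard way to minimize CVaR with stochastic gradient descent: the Rockafellar-Uryasev reformulation $\mathrm{CVaR}_\alpha(\ell) = \min_c \{ c + \mathbb{E}[(\ell - c)_+]/(1-\alpha) \}$, with SGD run jointly over the model parameters and the auxiliary variable $c$. The function name, the linear-regression setup, and all hyperparameters below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cvar_sgd(X, y, alpha=0.9, lr=0.01, epochs=50, seed=0):
    """Minimize the CVaR at level alpha of squared losses of a linear
    model, via SGD on the Rockafellar-Uryasev auxiliary-variable
    objective:  c + E[max(loss - c, 0)] / (1 - alpha)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    theta = np.zeros(d)  # model parameters
    c = 0.0              # auxiliary variable (converges to the VaR)
    for _ in range(epochs):
        for i in rng.permutation(n):
            pred = X[i] @ theta
            loss = (pred - y[i]) ** 2
            if loss > c:
                # Subgradient of c + (loss - c)/(1 - alpha)
                grad_theta = 2.0 * (pred - y[i]) * X[i] / (1.0 - alpha)
                grad_c = 1.0 - 1.0 / (1.0 - alpha)
            else:
                # Only the bare c term is active; loss part has zero grad.
                grad_theta = np.zeros(d)
                grad_c = 1.0
            theta -= lr * grad_theta
            c -= lr * grad_c
    return theta, c

# Usage: synthetic regression with heavy-tailed noise, where CVaR
# focuses the fit on the worst (1 - alpha) fraction of losses.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.standard_t(df=2, size=500)
theta, c = cvar_sgd(X, y, alpha=0.9)
losses = (X @ theta - y) ** 2
tail = np.sort(losses)[int(0.9 * len(losses)):]
print("empirical CVaR_0.9 of losses:", tail.mean())
```

Note that this only assumes access to i.i.d. samples, matching the setting the abstract describes: each update uses one sampled loss, and the $\mathbf{1}\{loss > c\}$ indicator is an unbiased stochastic subgradient of the tail expectation.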