Paper Title
EXACT: How to Train Your Accuracy
Paper Authors
Paper Abstract
Classification tasks are usually evaluated in terms of accuracy. However, accuracy is discontinuous and cannot be directly optimized using gradient ascent. Popular methods minimize cross-entropy, hinge loss, or other surrogate losses, which can lead to suboptimal results. In this paper, we propose a new optimization framework by introducing stochasticity into a model's output and optimizing expected accuracy, i.e., the accuracy of the stochastic model. Extensive experiments on linear models and deep image classification show that the proposed optimization method is a powerful alternative to widely used classification losses.
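To illustrate the idea behind the abstract, here is a minimal toy sketch (not the paper's actual EXACT algorithm). It assumes a 1-D binary linear model with score s = w·x and Gaussian noise ε ~ N(0, σ²) added to the score, so the per-example expected accuracy has the closed form Φ(y·w·x / σ), where Φ is the standard normal CDF. This quantity is smooth in w, so plain gradient ascent can maximize it directly, unlike the raw 0/1 accuracy. The dataset, step size, and noise scale below are all made up for illustration.

```python
import math

def gaussian_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_pdf(x):
    # Standard normal density, used for the gradient of the CDF.
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

# Toy 1-D linearly separable data; labels y are in {-1, +1}.
data = [(-2.0, -1), (-1.0, -1), (1.5, 1), (2.5, 1)]

w = 0.0      # single weight of the linear model
sigma = 1.0  # std. dev. of the Gaussian noise on the score
lr = 0.5     # step size for gradient ascent

for _ in range(200):
    grad = 0.0
    for x, y in data:
        z = y * w * x / sigma
        # d/dw  Phi(y*w*x/sigma) = phi(z) * y * x / sigma
        grad += gaussian_pdf(z) * y * x / sigma
    # Gradient ASCENT on expected accuracy (not descent on a loss).
    w += lr * grad / len(data)

expected_acc = sum(gaussian_cdf(y * w * x / sigma) for x, y in data) / len(data)
print(f"w = {w:.3f}, expected accuracy = {expected_acc:.3f}")
```

Because the expected accuracy is a smooth surrogate that converges to the true accuracy as the noise vanishes, the learned weight keeps growing to push every scaled margin y·w·x/σ far from zero, driving the expected accuracy toward 1 on this separable toy set.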