Paper Title

Learning-Augmented Mechanism Design: Leveraging Predictions for Facility Location

Authors

Priyank Agrawal, Eric Balkanski, Vasilis Gkatzelis, Tingting Ou, Xizhi Tan

Abstract

In this work we introduce an alternative model for the design and analysis of strategyproof mechanisms that is motivated by the recent surge of work in "learning-augmented algorithms". Aiming to complement the traditional approach in computer science, which analyzes the performance of algorithms based on worst-case instances, this line of work has focused on the design and analysis of algorithms that are enhanced with machine-learned predictions regarding the optimal solution. The algorithms can use the predictions as a guide to inform their decisions, and the goal is to achieve much stronger performance guarantees when these predictions are accurate (consistency), while also maintaining near-optimal worst-case guarantees, even if these predictions are very inaccurate (robustness). So far, these results have been limited to algorithms, but in this work we argue that another fertile ground for this framework is in mechanism design. We initiate the design and analysis of strategyproof mechanisms that are augmented with predictions regarding the private information of the participating agents. To exhibit the important benefits of this approach, we revisit the canonical problem of facility location with strategic agents in the two-dimensional Euclidean space. We study both the egalitarian and utilitarian social cost functions, and we propose new strategyproof mechanisms that leverage predictions to guarantee an optimal trade-off between consistency and robustness guarantees. This provides the designer with a menu of mechanism options to choose from, depending on her confidence regarding the prediction accuracy. Furthermore, we also prove parameterized approximation results as a function of the prediction error, showing that our mechanisms perform well even when the predictions are not fully accurate.
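
The abstract describes prediction-augmented strategyproof mechanisms only at a high level. The snippet below is a minimal, hypothetical 1-D sketch of the general idea, not the mechanism proposed in the paper: the prediction is inserted as fixed "phantom" points into a median rule, which keeps the rule strategyproof while steering the outcome toward the predicted location. The function names, the `num_phantoms` parameter, and the example coordinates are all illustrative assumptions.

```python
# A minimal, hypothetical 1-D sketch of a prediction-augmented facility
# location rule (not the paper's mechanism): the prediction enters as fixed
# "phantom" points in a median rule.  Because the phantoms are constants
# that do not depend on the reports, the rule remains strategyproof for
# agents with single-peaked preferences on the line.

def predicted_median_mechanism(reports, prediction, num_phantoms=1):
    """Place the facility at a median of the reported positions plus
    `num_phantoms` copies of the prediction.

    More phantom copies put more trust in the prediction (stronger
    consistency, weaker robustness); zero copies recovers the classic
    median mechanism.  For an even total we take the upper-middle order
    statistic, which is still a fixed order statistic and hence
    strategyproof.
    """
    points = sorted(list(reports) + [prediction] * num_phantoms)
    return points[len(points) // 2]


def egalitarian_cost(reports, facility):
    # Egalitarian (minimax) social cost: the largest agent-to-facility distance.
    return max(abs(x - facility) for x in reports)


def utilitarian_cost(reports, facility):
    # Utilitarian social cost: the sum of agent-to-facility distances.
    return sum(abs(x - facility) for x in reports)


if __name__ == "__main__":
    reports = [0.0, 0.2, 0.9, 1.0]
    accurate = 0.5   # close to the egalitarian optimum (midpoint of the range)
    wild = 10.0      # a badly inaccurate prediction

    for p in (accurate, wild):
        y = predicted_median_mechanism(reports, p, num_phantoms=1)
        print(f"prediction={p:5.1f} -> facility={y:.2f}, "
              f"egalitarian cost={egalitarian_cost(reports, y):.2f}, "
              f"utilitarian cost={utilitarian_cost(reports, y):.2f}")
```

In this toy run, the accurate prediction places the facility at the egalitarian optimum of the reports, while the wildly wrong prediction still leaves the facility inside the range of the reports, illustrating the consistency/robustness trade-off that the paper formalizes (with provably optimal trade-offs, in two-dimensional Euclidean space rather than on the line).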
