Paper Title
Robust Market Making via Adversarial Reinforcement Learning
Paper Authors
Paper Abstract
We show that adversarial reinforcement learning (ARL) can be used to produce market making agents that are robust to adversarial and adaptively-chosen market conditions. To apply ARL, we turn the well-studied single-agent model of Avellaneda and Stoikov [2008] into a discrete-time zero-sum game between a market maker and an adversary. The adversary acts as a proxy for other market participants that would like to profit at the market maker's expense. We empirically compare two conventional single-agent RL agents with ARL, and show that our ARL approach leads to: 1) the emergence of risk-averse behaviour without constraints or domain-specific penalties; 2) significant improvements in performance across a set of standard metrics, evaluated with or without an adversary in the test environment; and 3) improved robustness to model uncertainty. We empirically demonstrate that our ARL method consistently converges, and we prove for several special cases that the profiles to which we converge correspond to Nash equilibria in a simplified single-stage game.
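For intuition, the following is a minimal Python sketch of the kind of discrete-time zero-sum game the abstract describes, not the paper's actual implementation: the market maker chooses bid/ask quote offsets, the adversary perturbs the midprice drift, and the adversary's reward is the negative of the maker's mark-to-market PnL. The exponential fill-intensity form lambda(d) = A*exp(-k*d) follows Avellaneda and Stoikov [2008]; all numeric values and both placeholder policies are illustrative assumptions.

```python
# A minimal sketch of the zero-sum market-making game, under assumed
# Avellaneda-Stoikov dynamics. Parameter values (A, k, sigma, dt) and
# both policies are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

A, k = 140.0, 1.5        # assumed fill-intensity parameters: lambda(d) = A*exp(-k*d)
sigma, dt = 2.0, 0.005   # assumed midprice volatility and time step
T = 200                  # episode length in steps

def step(mid, inv, cash, bid_offset, ask_offset, drift):
    """One discrete step: the adversary picks `drift`, the maker picks quote offsets."""
    # Midprice follows arithmetic Brownian motion; the adversary perturbs its drift.
    mid += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    # Orders arrive with intensity decaying in the quote's distance from the midprice.
    if rng.random() < 1 - np.exp(-A * np.exp(-k * ask_offset) * dt):  # ask side filled
        inv -= 1
        cash += mid + ask_offset
    if rng.random() < 1 - np.exp(-A * np.exp(-k * bid_offset) * dt):  # bid side filled
        inv += 1
        cash -= mid - bid_offset
    return mid, inv, cash

mid, inv, cash = 100.0, 0, 0.0
for t in range(T):
    bid_offset, ask_offset = 0.5, 0.5   # placeholder maker policy: fixed symmetric quotes
    drift = -5.0 * np.sign(inv)         # placeholder adversary: push price against inventory
    mid, inv, cash = step(mid, inv, cash, bid_offset, ask_offset, drift)

maker_pnl = cash + inv * mid            # mark-to-market terminal wealth
print(f"maker reward: {maker_pnl:+.2f}, adversary reward: {-maker_pnl:+.2f}")  # zero-sum
```

In this sketch the adversarial drift makes any held inventory lose value, which illustrates why training against such an opponent would pressure the maker toward the risk-averse, inventory-flattening behaviour the abstract reports; in the paper both policies are learned, rather than fixed as here.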