Paper Title
Federated Offline Reinforcement Learning
Paper Authors
Paper Abstract
Evidence-based, data-driven dynamic treatment regimes are essential for personalized medicine, and they can benefit from offline reinforcement learning (RL). Although massive healthcare datasets are available across medical institutions, privacy constraints prohibit sharing them. Moreover, the data are heterogeneous across sites. Federated offline RL algorithms are therefore necessary and promising for addressing these problems. In this paper, we propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites. The proposed model makes analysis of site-level features possible. We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees. The proposed algorithm is communication-efficient: it requires only a single round of communication, in which the sites exchange summary statistics. We provide a theoretical guarantee for the proposed algorithm, showing that the suboptimality of the learned policy is comparable to the rate attained as if the data were not distributed. Extensive simulations demonstrate the effectiveness of the proposed algorithm. The method is applied to a multi-site sepsis dataset to illustrate its use in clinical settings.
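To make the "single round of communication by exchanging summary statistics" concrete, here is a minimal illustrative sketch, not the paper's actual algorithm: each site computes the sufficient statistics of a local least-squares regression (e.g., a linear value-function fit on a shared feature map), sends them once, and the server aggregates them to recover the same estimate it would get from pooled data. All names, dimensions, and the linear model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: K sites, each holding n local samples with a shared
# d-dimensional feature map. The "summary statistics" exchanged are the
# least-squares sufficient statistics (A_k = Phi^T Phi, b_k = Phi^T y).
K, n, d = 3, 200, 5
theta_true = rng.normal(size=d)  # shared (homogeneous) effect, for simulation

def site_summary(seed):
    """One site computes its local sufficient statistics (sent once)."""
    r = np.random.default_rng(seed)
    Phi = r.normal(size=(n, d))                     # local features
    y = Phi @ theta_true + 0.1 * r.normal(size=n)   # local regression targets
    return Phi.T @ Phi, Phi.T @ y                   # summary statistics

# Single communication round: every site sends (A_k, b_k) to the server.
summaries = [site_summary(k) for k in range(K)]

# Server aggregation: summing the statistics and solving gives exactly the
# pooled least-squares estimate, as if the data were centralized.
A = sum(A_k for A_k, _ in summaries)
b = sum(b_k for _, b_k in summaries)
theta_hat = np.linalg.solve(A, b)
```

Because least-squares sufficient statistics are additive across sites, the aggregated solution matches the centralized fit exactly; only the d x d and d-dimensional summaries (not raw patient data) ever leave a site, which is what makes this style of protocol compatible with privacy constraints.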