Paper title
Choquet regularization for reinforcement learning
Paper authors
Paper abstract
We propose \emph{Choquet regularizers} to measure and manage the level of exploration for reinforcement learning (RL), and reformulate the continuous-time entropy-regularized RL problem of Wang et al. (2020, JMLR, 21(198)), in which we replace the differential entropy used for regularization with a Choquet regularizer. We derive the Hamilton--Jacobi--Bellman equation of the problem, and solve it explicitly in the linear--quadratic (LQ) case by statically maximizing a mean--variance constrained Choquet regularizer. Under the LQ setting, we derive explicit optimal distributions for several specific Choquet regularizers, and conversely identify the Choquet regularizers that generate several widely used exploratory samplers such as $\varepsilon$-greedy, exponential, uniform and Gaussian.
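
To make the reformulation concrete, the following is a minimal LaTeX sketch of the regularized objective, assuming the exploratory formulation of Wang et al. (2020) with discount rate $\rho$ and exploration weight $\lambda$, and assuming a quantile-integral form for the Choquet regularizer $\Phi_h$; the notation and the exact conditions on the distortion function $h$ are assumptions made here for illustration, not quotations from the paper.

% Hedged sketch: \pi_t is the exploratory (relaxed) control distribution,
% r the running reward, Q_\pi the quantile function of \pi, and h a
% distortion function; all notation is assumed, see the paper for the
% precise formulation and conditions on h.
\begin{align*}
  % Entropy-regularized exploratory objective of Wang et al. (2020):
  V(x) &= \sup_{\pi} \mathbb{E}\!\left[ \int_0^\infty e^{-\rho t}
          \Big( r(X_t,\pi_t) + \lambda\,\mathcal{H}(\pi_t) \Big)\,dt
          \,\middle|\, X_0 = x \right],
  \qquad \mathcal{H}(\pi) = -\int \pi(a)\log\pi(a)\,da, \\
  % Choquet-regularized reformulation: the differential entropy \mathcal{H}
  % is replaced by a Choquet regularizer \Phi_h, written here in an assumed
  % quantile-integral form:
  \Phi_h(\pi) &= \int_0^1 Q_\pi(p)\,dh(p), \qquad
  V(x) = \sup_{\pi} \mathbb{E}\!\left[ \int_0^\infty e^{-\rho t}
          \Big( r(X_t,\pi_t) + \lambda\,\Phi_h(\pi_t) \Big)\,dt
          \,\middle|\, X_0 = x \right].
\end{align*}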