Paper Title
Risk Directed Importance Sampling in Stochastic Dual Dynamic Programming with Hidden Markov Models for Grid Level Energy Storage
Paper Authors
Paper Abstract
Power systems that need to integrate renewables at a large scale must account for the high levels of uncertainty introduced by these power sources. This can be accomplished with a system of many distributed grid-level storage devices. However, developing a cost-effective and robust control policy in this setting is challenging due to the high dimensionality of the resource state and the highly volatile stochastic processes involved. We first model the problem using a carefully calibrated power grid model and a specialized hidden Markov stochastic model for wind power that replicates crossing times. We then base our control policy on a variant of stochastic dual dynamic programming (SDDP), an algorithm well suited to certain high-dimensional control problems, which we modify to accommodate hidden Markov uncertainty in the stochastics. However, the algorithm can be impractical because it exhibits relatively slow convergence. To accelerate it, we apply both quadratic regularization and a risk-directed importance sampling technique for sampling the outcome space at each time step of the algorithm's backward pass. We show that the resulting policies are more robust than those developed using classical SDDP modeling assumptions and algorithms.
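The abstract names two ingredients: a hidden Markov model for wind power and a backward pass whose outcome sampling is tilted toward risk. The following minimal Python sketch shows how those pieces might fit together. It is an illustration only, not the authors' model or algorithm; the two-regime transition matrix, the 0.6 p.u. demand level, the tilt parameter, and all function names are hypothetical choices made for this example.

```python
# Illustrative sketch (not the paper's implementation) of:
#   (i) a hidden Markov wind model where a latent regime drives forecast error, and
#   (ii) a single backward-pass cost-to-go estimate using risk-tilted importance sampling.
# All parameters below are assumed toy values.

import numpy as np

rng = np.random.default_rng(0)

# (i) Hidden Markov wind model: latent regimes ("calm", "gusty") are never observed directly.
P = np.array([[0.95, 0.05],        # regime transition probabilities (hypothetical)
              [0.20, 0.80]])
sigma = np.array([0.05, 0.25])     # wind-error standard deviation per regime (hypothetical)

def sample_wind_path(T, w0=0.5):
    """Sample a normalized wind power path; only the wind output is observed."""
    regime, w, path = 0, w0, []
    for _ in range(T):
        regime = rng.choice(2, p=P[regime])                       # hidden regime transition
        w = np.clip(w + sigma[regime] * rng.standard_normal(), 0.0, 1.0)
        path.append(w)
    return np.array(path)

# (ii) One backward-pass step: estimate expected stage cost with risk-directed sampling.
def backward_pass_step(outcomes, stage_cost, base_prob, tilt=2.0, n_samples=10):
    """Importance-sample outcomes toward high cost, then re-weight to stay unbiased."""
    costs = np.array([stage_cost(o) for o in outcomes])
    scale = costs.max() if costs.max() > 0 else 1.0
    q = base_prob * np.exp(tilt * costs / scale)                  # risk-tilted proposal
    q /= q.sum()
    idx = rng.choice(len(outcomes), size=n_samples, p=q)
    weights = base_prob[idx] / q[idx]                             # likelihood ratios
    return float(np.mean(weights * costs[idx]))

if __name__ == "__main__":
    wind = sample_wind_path(T=24)
    base_prob = np.full(len(wind), 1.0 / len(wind))               # nominal outcome probabilities
    shortfall = lambda w: max(0.0, 0.6 - w)                       # cost of missing 0.6 p.u. demand
    print("importance-sampled expected shortfall:",
          backward_pass_step(wind, shortfall, base_prob))
```

The tilting step is what makes the sampling "risk directed" in this sketch: outcomes with higher stage cost are drawn more often, while the likelihood-ratio weights keep the cost estimate unbiased under the original distribution.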