Paper Title

Neural Networks and Value at Risk

Authors

Arimond, Alexander, Borth, Damian, Hoepner, Andreas, Klawunn, Michael, Weisheit, Stefan

Abstract

Utilizing a generative regime switching framework, we perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation. Using equity markets and long term bonds as test assets in the global, US, Euro area and UK setting over a sample horizon of up to 1,250 weeks ending in August 2018, we investigate neural networks along three design steps relating to (i) the initialization of the neural network, (ii) the incentive function according to which it has been trained, and (iii) the amount of data it is fed. First, we compare neural networks with random seeding with networks that are initialized via estimations from the best-established model (i.e. the Hidden Markov model). We find the latter to outperform in terms of the frequency of VaR breaches (i.e. the realized return falling short of the estimated VaR threshold). Second, we balance the incentive structure of the loss function of our networks by adding a second objective to the training instructions, so that the neural networks optimize for accuracy while also aiming to stay in empirically realistic regime distributions (i.e. bull vs. bear market frequencies). In particular, this design feature enables the balanced incentive recurrent neural network (RNN) to outperform the single incentive RNN as well as any other neural network or established approach by statistically and economically significant levels. Third, we halve our training data set of 2,000 days. We find that our networks, when fed with substantially less data (i.e. 1,000 days), perform significantly worse, which highlights a crucial weakness of neural networks in their dependence on very large data sets ...
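To make the mechanics concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of how a regime-switching Monte-Carlo simulation can produce a VaR threshold and how VaR breaches are then counted. All regime parameters, the transition matrix, and the placeholder realized returns are illustrative assumptions; in the paper, these quantities would instead come from the Hidden Markov model or the (balanced-incentive) recurrent neural network.

```python
# A minimal sketch (not the authors' implementation) of regime-switching
# Monte-Carlo VaR estimation and breach counting, as described in the abstract.
# Every numeric value below is an illustrative assumption, not a paper result.
import numpy as np

rng = np.random.default_rng(seed=42)

# Two regimes (0 = bull, 1 = bear) with assumed weekly return parameters
# and an assumed regime transition matrix. In the paper these would be
# estimated by the Hidden Markov model or the (balanced-incentive) RNN.
mu = np.array([0.002, -0.004])      # mean return per regime
sigma = np.array([0.015, 0.035])    # return volatility per regime
P = np.array([[0.95, 0.05],         # P(bull -> bull), P(bull -> bear)
              [0.10, 0.90]])        # P(bear -> bull), P(bear -> bear)

def simulate_returns(start_regime, n_paths=10_000, horizon=1):
    """Monte-Carlo simulate cumulative returns over `horizon` steps."""
    total = np.zeros(n_paths)
    regime = np.full(n_paths, start_regime)
    for _ in range(horizon):
        total += rng.normal(mu[regime], sigma[regime])
        # Draw the next regime from the row of P for the current regime.
        regime = (rng.random(n_paths) < P[regime, 1]).astype(int)
    return total

def var_threshold(simulated_returns, alpha=0.01):
    """VaR threshold at level alpha: the alpha-quantile of simulated returns."""
    return np.quantile(simulated_returns, alpha)

# One-step-ahead 99% VaR, conditional on currently being in the bull regime.
var_99 = var_threshold(simulate_returns(start_regime=0), alpha=0.01)

# A VaR breach occurs when the realized return falls short of the threshold.
realized = rng.normal(0.001, 0.02, size=250)   # placeholder realized returns
breach_frequency = np.mean(realized < var_99)
print(f"99% VaR threshold: {var_99:.4f}  breach frequency: {breach_frequency:.2%}")
```

For a well-calibrated model at the 99% level, realized returns should fall below the threshold in roughly 1% of periods; the breach-frequency comparison referred to in the abstract is a count of exactly this kind.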
