Paper Title

Precision Machine Learning

Paper Authors

Michaud, Eric J., Liu, Ziming, Tegmark, Max

Abstract

We explore unique considerations involved in fitting ML models to data with very high precision, as is often required for science applications. We empirically compare various function approximation methods and study how they scale with increasing parameters and data. We find that neural networks can often outperform classical approximation methods on high-dimensional examples, by auto-discovering and exploiting modular structures therein. However, neural networks trained with common optimizers are less powerful for low-dimensional cases, which motivates us to study the unique properties of neural network loss landscapes and the corresponding optimization challenges that arise in the high precision regime. To address the optimization issue in low dimensions, we develop training tricks which enable us to train neural networks to extremely low loss, close to the limits allowed by numerical precision.
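The phrase "limits allowed by numerical precision" can be made concrete via machine epsilon, which bounds the relative roundoff of a single floating-point operation and hence the lowest loss that is numerically meaningful. A minimal sketch (illustrative only, not from the paper; the `eps**2` MSE-floor estimate is a back-of-the-envelope assumption):

```python
import numpy as np

# Machine epsilon: the smallest relative gap between representable floats.
eps32 = np.finfo(np.float32).eps  # 2**-23
eps64 = np.finfo(np.float64).eps  # 2**-52

# For a mean-squared-error loss, roundoff of order eps in the predictions
# alone contributes on the order of eps**2 to the achievable loss floor.
print(f"float32 eps: {eps32:.3e}, rough MSE floor ~ {eps32**2:.3e}")
print(f"float64 eps: {eps64:.3e}, rough MSE floor ~ {eps64**2:.3e}")
```

This is why "extremely low loss" in float64 training can mean MSE values many orders of magnitude below anything reachable in float32.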
