Paper Title

Prune Responsibly

Paper Author

Paganini, Michela

Paper Abstract

Irrespective of the specific definition of fairness in a machine learning application, pruning the underlying model affects it. We investigate and document the emergence and exacerbation of undesirable per-class performance imbalances, across tasks and architectures, for almost one million categories considered across over 100K image classification models that undergo a pruning process. We demonstrate the need for transparent reporting, inclusive of bias, fairness, and inclusion metrics, in real-life engineering decision-making around neural network pruning. In response to the calls for quantitative evaluation of AI models to be population-aware, we present neural network pruning as a tangible application domain where the ways in which accuracy-efficiency trade-offs disproportionately affect underrepresented or outlier groups have historically been overlooked. We provide a simple, Pareto-based framework to insert fairness considerations into value-based operating point selection processes, and to re-evaluate pruning technique choices.
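
The abstract names a Pareto-based framework but does not specify it. Below is a minimal Python sketch of one plausible reading: each pruned-model candidate is scored on three maximized axes, sparsity (efficiency), overall accuracy, and worst per-class accuracy as a fairness proxy, and only non-dominated candidates survive as operating points. The Candidate fields, the choice of axes, and the numbers are illustrative assumptions, not the paper's actual formulation.

# Minimal, illustrative sketch (assumed, not the paper's implementation):
# keep every candidate that no other candidate dominates on all three axes.

from dataclasses import dataclass
from typing import List

@dataclass
class Candidate:
    name: str
    sparsity: float            # fraction of weights removed (efficiency)
    accuracy: float            # overall top-1 accuracy
    min_class_accuracy: float  # worst per-class accuracy (fairness proxy)

def dominates(a: Candidate, b: Candidate) -> bool:
    """True if `a` is no worse than `b` on every axis and strictly better
    on at least one; all three axes are treated as maximized."""
    no_worse = (a.sparsity >= b.sparsity and
                a.accuracy >= b.accuracy and
                a.min_class_accuracy >= b.min_class_accuracy)
    better = (a.sparsity > b.sparsity or
              a.accuracy > b.accuracy or
              a.min_class_accuracy > b.min_class_accuracy)
    return no_worse and better

def pareto_front(candidates: List[Candidate]) -> List[Candidate]:
    """Operating points worth considering: candidates dominated by no other."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates)]

if __name__ == "__main__":
    # Numbers are made up for illustration.
    pool = [
        Candidate("dense",         0.00, 0.76, 0.52),
        Candidate("prune-50%",     0.50, 0.75, 0.49),
        Candidate("prune-50%-alt", 0.50, 0.73, 0.45),  # dominated by prune-50%
        Candidate("prune-80%",     0.80, 0.74, 0.38),  # large worst-class hit
        Candidate("prune-90%",     0.90, 0.70, 0.21),
    ]
    for c in pareto_front(pool):
        print(c)

Choosing among the surviving points then becomes an explicit value judgment, how much worst-class accuracy to trade for sparsity, rather than a choice made implicitly by reporting overall accuracy alone.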
