Paper Title

Evolutionary bagging for ensemble learning

Paper Authors

Giang Ngo, Rodney Beard, Rohitash Chandra

Paper Abstract

Ensemble learning has gained success in machine learning with major advantages over other learning methods. Bagging is a prominent ensemble learning method that creates subgroups of data, known as bags, that are trained by individual machine learning methods such as decision trees. Random forest is a prominent example of bagging with additional features in the learning process. Evolutionary algorithms have been prominent for optimisation problems and have also been used for machine learning. Evolutionary algorithms are gradient-free methods that work with a population of candidate solutions that maintains diversity for creating new solutions. In conventional bagged ensemble learning, the bags are created once and their content, in terms of training examples, is fixed over the learning process. In our paper, we propose evolutionary bagged ensemble learning, where we utilise evolutionary algorithms to evolve the content of the bags in order to iteratively enhance the ensemble by providing diversity in the bags. The results show that our evolutionary ensemble bagging method outperforms conventional ensemble methods (bagging and random forests) on several benchmark datasets under certain constraints. We find that evolutionary bagging can inherently sustain a diverse set of bags without a reduction in performance accuracy.
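A minimal sketch of the general idea described in the abstract, not the authors' reference implementation: bags are bootstrap index sets into the training data, each bag trains a decision tree, and the bag contents are perturbed each generation (mutation and crossover on the index sets), keeping the evolved bags only when the ensemble's majority-vote validation accuracy does not degrade. All names, hyperparameters, and the specific mutation/crossover scheme below are illustrative assumptions.

```python
# Illustrative evolutionary-bagging loop (assumed scheme, not the paper's exact algorithm).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

n_bags, bag_size, n_generations = 10, len(X_tr), 20

def fit_ensemble(bags):
    """Train one decision tree per bag (a bag is an array of training indices)."""
    return [DecisionTreeClassifier(random_state=0).fit(X_tr[b], y_tr[b]) for b in bags]

def ensemble_accuracy(models):
    """Majority-vote accuracy of the ensemble on the validation split."""
    votes = np.stack([m.predict(X_val) for m in models])
    majority = (votes.mean(axis=0) >= 0.5).astype(int)
    return (majority == y_val).mean()

# Initial bags: ordinary bootstrap samples, as in conventional bagging.
bags = [rng.integers(0, len(X_tr), size=bag_size) for _ in range(n_bags)]
best_acc = ensemble_accuracy(fit_ensemble(bags))

for gen in range(n_generations):
    candidate = [b.copy() for b in bags]
    # Mutation: replace a small fraction of each bag with freshly drawn examples.
    for b in candidate:
        k = max(1, bag_size // 20)
        b[rng.choice(bag_size, k, replace=False)] = rng.integers(0, len(X_tr), size=k)
    # Crossover: two randomly chosen bags exchange the first half of their contents.
    i, j = rng.choice(n_bags, 2, replace=False)
    cut = bag_size // 2
    candidate[i][:cut], candidate[j][:cut] = candidate[j][:cut].copy(), candidate[i][:cut].copy()
    acc = ensemble_accuracy(fit_ensemble(candidate))
    if acc >= best_acc:  # keep the evolved bags only if the ensemble is not worse
        bags, best_acc = candidate, acc

print(f"validation accuracy after evolution: {best_acc:.3f}")
```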
