Abstract
Ensemble learning techniques have achieved state-of-the-art performance in diverse machine learning applications by combining the predictions of two or more base models. This paper presents a concise overview of ensemble learning, covering the three main ensemble methods (bagging, boosting, and stacking) from their early development to recent state-of-the-art algorithms. The study focuses on widely used ensemble algorithms, including random forest, adaptive boosting (AdaBoost), gradient boosting, extreme gradient boosting (XGBoost), light gradient boosting machine (LightGBM), and categorical boosting (CatBoost). It also concisely covers their mathematical and algorithmic representations, a treatment lacking in the existing literature that should benefit machine learning researchers and practitioners.
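As a minimal, dependency-free illustration of the core idea the abstract summarizes (combining the predictions of several base models), the sketch below implements bagging: decision stumps are trained on bootstrap resamples of a toy dataset and combined by majority vote. The function names, the toy data, and the choice of stump learners are all illustrative assumptions, not taken from the paper.

```python
import random
from collections import Counter

def train_stump(xs, ys):
    """Fit a one-feature decision stump: choose the threshold t that
    minimises training error for the rule 'predict 1 when x > t'."""
    best_t, best_err = None, float("inf")
    for t in sorted(set(xs)):
        err = sum(1 for x, y in zip(xs, ys) if (x > t) != bool(y))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagging_fit(xs, ys, n_models=11, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        stumps.append(train_stump([xs[i] for i in idx],
                                  [ys[i] for i in idx]))
    return stumps

def bagging_predict(stumps, x):
    """Combine the base models' predictions by majority (hard) vote."""
    votes = Counter(int(x > t) for t in stumps)
    return votes.most_common(1)[0][0]

# Toy one-dimensional data: the true label is 1 exactly when x > 5.
xs = [1, 2, 3, 4, 6, 7, 8, 9]
ys = [0, 0, 0, 0, 1, 1, 1, 1]
model = bagging_fit(xs, ys)
print([bagging_predict(model, x) for x in [0, 10]])  # → [0, 1]
```

Boosting and stacking differ only in how the base models are built and combined: boosting trains them sequentially with reweighted (or gradient-corrected) data, while stacking feeds the base predictions into a second-level meta-learner instead of a fixed vote.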
Original language | English |
---|---|
Pages (from-to) | 99129-99149 |
Number of pages | 21 |
Journal | IEEE Access |
Volume | 10 |
DOIs | |
Publication status | Published - 2022 |
Keywords
- Algorithms
- classification
- ensemble learning
- fraud detection
- machine learning
- medical diagnosis
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering
- Electrical and Electronic Engineering