Abstract
Large language models (LLMs) have transformed the field of natural language processing (NLP), achieving state-of-the-art performance in tasks such as translation, summarization, and reasoning. Despite their impressive capabilities, challenges persist, including biases, limited interpretability, and resource-intensive training. Ensemble learning, a technique that combines multiple models to improve performance, presents a promising avenue for addressing these limitations in LLMs. This review explores the emerging field of ensemble LLMs, providing a comprehensive analysis of current methodologies, applications across diverse domains, and existing challenges. By reviewing ensemble strategies and evaluating their effectiveness, this paper highlights the potential of ensemble LLMs to enhance robustness and generalizability while proposing future research directions to advance the field.
| Original language | English |
|---|---|
| Article number | 688 |
| Journal | Information (Switzerland) |
| Volume | 16 |
| Issue number | 8 |
| DOIs | |
| Publication status | Published - Aug 2025 |
Keywords
- GPT
- LLMs
- NLP
- ensemble learning
- transformers
ASJC Scopus subject areas
- Information Systems
Press/Media
- University of Johannesburg Researchers Have Provided New Data on Information Technology (Ensemble Large Language Models: A Survey), 10/09/25 — 1 item of media coverage