Ensemble Large Language Models: A Survey

Research output: Contribution to journal › Review article › peer-review

4 Citations (Scopus)

Abstract

Large language models (LLMs) have transformed the field of natural language processing (NLP), achieving state-of-the-art performance in tasks such as translation, summarization, and reasoning. Despite their impressive capabilities, challenges persist, including biases, limited interpretability, and resource-intensive training. Ensemble learning, a technique that combines multiple models to improve performance, presents a promising avenue for addressing these limitations in LLMs. This review explores the emerging field of ensemble LLMs, providing a comprehensive analysis of current methodologies, applications across diverse domains, and existing challenges. By reviewing ensemble strategies and evaluating their effectiveness, this paper highlights the potential of ensemble LLMs to enhance robustness and generalizability while proposing future research directions to advance the field.
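As a minimal illustration of the ensemble idea the abstract describes, multiple models' outputs for the same prompt can be combined by majority voting. The sketch below is a simplified assumption of one common strategy, not a method from the survey itself; the model outputs are hypothetical stand-ins for real LLM generations.

```python
from collections import Counter

def majority_vote(answers):
    """Return the most frequent answer among ensemble members.

    A minimal sketch: each ensemble member contributes one string
    answer, and the mode of the answers is taken as the ensemble's
    prediction. Ties are broken by first occurrence in the list.
    """
    counts = Counter(answers)
    winner, _ = counts.most_common(1)[0]
    return winner

# Hypothetical outputs from three LLMs answering the same question.
outputs = ["Paris", "Paris", "Lyon"]
print(majority_vote(outputs))  # -> Paris
```

Real ensemble-LLM systems surveyed in the paper are more elaborate (e.g., weighted fusion or routing), but voting captures the core intuition: aggregating diverse models can be more robust than relying on any single one.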

Original language: English
Article number: 688
Journal: Information (Switzerland)
Volume: 16
Issue number: 8
DOIs
Publication status: Published - Aug 2025

Keywords

  • GPT
  • LLMs
  • NLP
  • ensemble learning
  • transformers

ASJC Scopus subject areas

  • Information Systems
