TY - JOUR
T1 - A Comprehensive Review of Deep Learning
T2 - Architectures, Recent Advances, and Applications
AU - Mienye, Ibomoiye Domor
AU - Swart, Theo G.
N1 - Publisher Copyright:
© 2024 by the authors.
PY - 2024/12
Y1 - 2024/12
N2 - Deep learning (DL) has become a core component of modern artificial intelligence (AI), driving significant advancements across diverse fields by facilitating the analysis of complex systems, from protein folding in biology to molecular discovery in chemistry and particle interactions in physics. However, the field of deep learning is constantly evolving, with recent innovations in both architectures and applications. Therefore, this paper provides a comprehensive review of recent DL advances, covering the evolution and applications of foundational models like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), as well as recent architectures such as transformers, generative adversarial networks (GANs), capsule networks, and graph neural networks (GNNs). Additionally, the paper discusses novel training techniques, including self-supervised learning, federated learning, and deep reinforcement learning, which further enhance the capabilities of deep learning models. By synthesizing recent developments and identifying current challenges, this paper provides insights into the state of the art and future directions of DL research, offering valuable guidance for both researchers and industry experts.
AB - Deep learning (DL) has become a core component of modern artificial intelligence (AI), driving significant advancements across diverse fields by facilitating the analysis of complex systems, from protein folding in biology to molecular discovery in chemistry and particle interactions in physics. However, the field of deep learning is constantly evolving, with recent innovations in both architectures and applications. Therefore, this paper provides a comprehensive review of recent DL advances, covering the evolution and applications of foundational models like convolutional neural networks (CNNs) and recurrent neural networks (RNNs), as well as recent architectures such as transformers, generative adversarial networks (GANs), capsule networks, and graph neural networks (GNNs). Additionally, the paper discusses novel training techniques, including self-supervised learning, federated learning, and deep reinforcement learning, which further enhance the capabilities of deep learning models. By synthesizing recent developments and identifying current challenges, this paper provides insights into the state of the art and future directions of DL research, offering valuable guidance for both researchers and industry experts.
KW - deep learning
KW - GAN
KW - GRU
KW - LLM
KW - LSTM
KW - machine learning
KW - NLP
KW - transformers
UR - http://www.scopus.com/inward/record.url?scp=85213446313&partnerID=8YFLogxK
U2 - 10.3390/info15120755
DO - 10.3390/info15120755
M3 - Article
AN - SCOPUS:85213446313
SN - 2078-2489
VL - 15
JO - Information (Switzerland)
JF - Information (Switzerland)
IS - 12
M1 - 755
ER -