Neural Network Optimization and Implications of High and Low Gradient Results in Training Vanilla and Hybrid Adaptive Neural Network Models for Effective Signal Power Loss Prediction

Virginia C. Ebhota, Thokozani Shongwe

Research output: Contribution to journal › Article › peer-review

Abstract

This work investigated gradient performance during the training of two Neural Network (NN) models, a Vanilla NN (VNN) model and a designed hybrid adaptive NN model, using measurement datasets from a Line-of-Sight (LOS) Area 1 and a Non-Line-of-Sight (NLOS) Area 2 of a Long-Term Evolution (LTE) micro-cell environment. The training results in Tables 1 to 4 for the dataset from LOS Area 1 show that training the VNN model with the Levenberg-Marquardt (LM) training algorithm stopped at a gradient value of 1.29 after 13 epochs, while training the same VNN model with the Bayesian Regularization (BR) training algorithm stopped at a gradient value of 0.084 after 32 epochs. Training the hybrid adaptive NN model with the LM training algorithm stopped at a gradient value of 3.23 after 11 epochs, while training the same hybrid adaptive NN model with the BR training algorithm stopped at a gradient value of 0.055 after 20 epochs. The same pattern of results was observed when training the NN models with the dataset from NLOS Area 2. The training results showed effective training of the NN, with a low stopped gradient value, using the designed hybrid adaptive NN model, and demonstrated the superiority of the hybrid adaptive NN model over the VNN model and of the BR training algorithm over the LM training algorithm. This is because the hybrid adaptive NN model combines the merits of the ADALINE and VNN models for optimal performance, and because the BR training algorithm compensates for an exaggerated number of network parameters during training, thereby avoiding over-fitting.
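The abstract's "gradient stopped value" is the gradient of the training cost at the epoch where the optimizer terminates; a small norm indicates training stopped on a flat region of the error surface. As a minimal sketch of the idea (not the paper's architecture, datasets, or exact training setup), the snippet below trains a one-hidden-layer network on synthetic path-loss-style data with SciPy's Levenberg-Marquardt solver and reads off the gradient at termination. The regularized variant is plain weight decay, a crude stand-in for BR, which additionally adapts the penalty from the data; all names and values here are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic stand-in for a path-loss dataset (NOT the paper's measurements):
# distance (km) -> signal power loss (dB), log-distance model plus noise.
rng = np.random.default_rng(0)
dist_km = np.linspace(0.05, 1.0, 60)
loss_db = 120 + 35 * np.log10(dist_km * 1000) + rng.normal(0, 1.5, dist_km.size)

# Normalize input and target so the tanh units operate in a sensible range.
x = np.log10(dist_km * 1000)
x = (x - x.mean()) / x.std()
y = (loss_db - loss_db.mean()) / loss_db.std()

H = 5  # hidden neurons in a one-hidden-layer feedforward network

def forward(params, inputs):
    """Evaluate the tanh network for a flat parameter vector."""
    w1, b1 = params[:H], params[H:2 * H]
    w2, b2 = params[2 * H:3 * H], params[3 * H]
    return np.tanh(np.outer(inputs, w1) + b1) @ w2 + b2

def residuals(params):
    return forward(params, x) - y

def residuals_reg(params, lam=0.05):
    # Crude regularization stand-in: append weight-magnitude penalties as
    # extra residuals (plain weight decay, not true Bayesian Regularization,
    # which infers the penalty strength from the data).
    return np.concatenate([residuals(params), np.sqrt(lam) * params])

p0 = rng.normal(0.0, 0.5, 3 * H + 1)
fit = least_squares(residuals, p0, method="lm")        # Levenberg-Marquardt
fit_reg = least_squares(residuals_reg, p0, method="lm")

# Gradient of the cost at termination -- the quantity whose magnitude the
# abstract reports as the "gradient stopped value".
grad_norm = np.linalg.norm(fit.grad)
rmse = np.sqrt(np.mean(residuals(fit.x) ** 2))
print(f"stopped gradient norm: {grad_norm:.3g}, RMSE: {rmse:.3f}")
```

Because LM's trust-region steps only accept cost decreases, the fitted RMSE ends at or below that of simply predicting the mean of the normalized target.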

Original language: English
Pages (from-to): 274-285
Number of pages: 12
Journal: International Review on Modelling and Simulations
Volume: 16
Issue number: 5
DOIs
Publication status: Published - 2023

Keywords

  • ANNs
  • Bayesian Regularization Algorithm
  • Gradient
  • Gradient Descent
  • Hybrid Adaptive Neural Network
  • Optimization
  • Vanilla Neural Network

ASJC Scopus subject areas

  • Modeling and Simulation
  • General Chemical Engineering
  • Mechanical Engineering
  • Logic
  • Discrete Mathematics and Combinatorics
  • Electrical and Electronic Engineering
  • Applied Mathematics
