Designing and Investigating the Effect of Single-Intermediate and 2-Intermediate Layers of Vanilla Neural Networks in Electromagnetic Path Loss Prediction

Virginia C. Ebhota, Thokozani Shongwe

Research output: Contribution to journal › Article › peer-review

Abstract

This work examines the impact of two architectural compositions of Vanilla Neural Networks (VNNs), a single-intermediate-layer VNN and a two-intermediate-layer VNN, on the prediction of signal power loss, using measured data from a Long Term Evolution (LTE) network collected in a micro-cell built-up environment. The effect of different values of the learning-rate hyper-parameter on the two VNN architectural models was also examined. The early stopping method, with the data partitioned in the ratio 75%:15%:15%, was adopted during network training to avoid over-fitting, and the Bayesian Regularization (BR) training algorithm was employed to ensure good network generalization. The intermediate-layer neurons of both the single-intermediate-layer VNN and the two-intermediate-layer VNN were carefully selected to ensure adequate, robust training with excellent predictive performance. The statistical performance metrics Correlation Coefficient (R), Standard Deviation (SD) and Root Mean Squared Error (RMSE) were employed to analyse the performance of the two VNN architectures, while the Mean Squared Error (MSE) and the Coefficient of Regression (R) were employed to examine the effect of the selected learning-rate values on the prediction performance of the two VNN models. The overall experimental results, obtained under identical training conditions for both VNN models while considering their architectural structures and the effect of learning rates, show that for efficient neural network training and prediction of signal power loss, a well-trained single-intermediate-layer VNN with an appropriate number of neurons gives more efficient and optimal predictions than the two-intermediate-layer VNN model, its actual output lying closest to the desired output. The best training output of the single-intermediate-layer VNN, with 52 neurons in the intermediate layer, gives an R of 0.9728, an SD of 1.2758 and an RMSE of 1.7435, while the best result of the two-intermediate-layer VNN, obtained with [16, 20] neurons (the best-performing of the neuron numbers considered), gives an R of 0.9531, an SD of 1.5407 and an RMSE of 2.2754. Training the single-intermediate-layer VNN with a small learning rate of 0.002 yields a very high R of 0.9903, whereas the two-intermediate-layer VNN gives an R of 0.8810. The training time required for the single-intermediate-layer VNN is considerably lower than that required for the two-intermediate-layer VNN to reach its best prediction results. These prediction results are of utmost importance in the planning, design and upgrading of wireless networks for optimal performance, especially when applied in environments similar to the data-collection environment, as the VNN results show adaptability and robustness.
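The experimental setup described above can be made concrete with a short sketch. The Python fragment below (a minimal illustration, not the authors' code) compares a single-intermediate-layer VNN with 52 neurons against a two-intermediate-layer VNN with [16, 20] neurons, using the learning rate of 0.002 highlighted in the abstract. It runs on synthetic log-distance path loss data, since the LTE drive-test measurements are not public; scikit-learn's MLPRegressor with the Adam optimiser stands in for the Bayesian Regularization algorithm, which scikit-learn does not implement, and its built-in early-stopping validation split only approximates the paper's train/validation/test partition. The R, SD and RMSE metrics are computed as named in the abstract.

```python
# Minimal sketch (assumptions: synthetic data, Adam in place of Bayesian
# Regularization) comparing single- and two-intermediate-layer VNNs.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0.05, 2.0, size=(2000, 1))                  # Tx-Rx distance in km (synthetic)
y = 120 + 35 * np.log10(X[:, 0]) + rng.normal(0, 3, 2000)   # log-distance path loss + shadowing

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.15, random_state=0)

def evaluate(hidden_layers):
    model = MLPRegressor(hidden_layer_sizes=hidden_layers,
                         learning_rate_init=0.002,    # the small rate the paper found best
                         early_stopping=True,         # hold out a validation split, stop on stall
                         validation_fraction=0.15,
                         max_iter=2000, random_state=0)
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    r = np.corrcoef(y_test, pred)[0, 1]               # correlation coefficient R
    sd = np.std(y_test - pred)                        # standard deviation of prediction error
    rmse = np.sqrt(mean_squared_error(y_test, pred))  # root mean squared error
    return r, sd, rmse

for layers in [(52,), (16, 20)]:                      # the paper's best single- and two-layer sizes
    print(layers, "R=%.4f  SD=%.4f  RMSE=%.4f" % evaluate(layers))
```

On data of this kind the single-layer network typically matches or beats the deeper one at a fraction of the training time, which is the design trade-off the abstract reports; the specific R, SD and RMSE values naturally differ from the paper's, which were obtained on measured LTE data.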

Original language: English
Pages (from-to): 262-276
Number of pages: 15
Journal: International Journal on Communications Antenna and Propagation
Volume: 13
Issue number: 5
DOIs
Publication status: Published - 2023

Keywords

  • Artificial Neural Networks
  • Bayesian Regularization Mathematical Training Algorithm
  • Early Stopping Method
  • Electromagnetic Signal Power Loss
  • Learning Rate
  • Vanilla Neural Networks

ASJC Scopus subject areas

  • Signal Processing
  • Media Technology
  • Instrumentation
  • Hardware and Architecture
  • Computer Networks and Communications
  • Electrical and Electronic Engineering
