TY - JOUR
T1 - Practical trade-offs in neural network optimization
T2 - brute force search and gradient descent
AU - Somefun, Tobiloba Emmanuel
AU - Owolabi, Timilehin
AU - Longe, Omowunmi Mary
N1 - Publisher Copyright:
© 2025 The Author(s). Published by IOP Publishing Ltd.
PY - 2025/6/30
Y1 - 2025/6/30
N2 - Choosing the right optimization method is vital in the fast-moving field of neural networks. This work closely examines two approaches: gradient descent optimization and brute force search. Whereas gradient descent uses gradients to move iteratively toward optimal solutions, brute force exhaustively evaluates every feasible parameter configuration. Training models on a classification task (Iris dataset) and a regression task (Diabetes dataset) yielded several notable findings. Brute force optimization demonstrated its thoroughness by achieving lower loss and higher precision. Although it took longer to train, it also, surprisingly, used less memory, making it attractive for resource-constrained environments. Conversely, gradient descent converged faster over the same period, but this speed came at the cost of higher memory usage and occasional convergence to local minima, which hurt its final results. Notwithstanding these trade-offs, its balance of speed and scalability makes it a strong candidate for time-sensitive applications or large-scale neural networks. The results imply that no one-size-fits-all solution exists: the thoroughness of brute force and the efficiency of gradient descent serve different needs, which suggests hybrid approaches that combine their advantages. Investigating such options will be essential to unlocking new levels of optimization as neural networks evolve.
AB - Choosing the right optimization method is vital in the fast-moving field of neural networks. This work closely examines two approaches: gradient descent optimization and brute force search. Whereas gradient descent uses gradients to move iteratively toward optimal solutions, brute force exhaustively evaluates every feasible parameter configuration. Training models on a classification task (Iris dataset) and a regression task (Diabetes dataset) yielded several notable findings. Brute force optimization demonstrated its thoroughness by achieving lower loss and higher precision. Although it took longer to train, it also, surprisingly, used less memory, making it attractive for resource-constrained environments. Conversely, gradient descent converged faster over the same period, but this speed came at the cost of higher memory usage and occasional convergence to local minima, which hurt its final results. Notwithstanding these trade-offs, its balance of speed and scalability makes it a strong candidate for time-sensitive applications or large-scale neural networks. The results imply that no one-size-fits-all solution exists: the thoroughness of brute force and the efficiency of gradient descent serve different needs, which suggests hybrid approaches that combine their advantages. Investigating such options will be essential to unlocking new levels of optimization as neural networks evolve.
KW - brute force optimization
KW - comparison
KW - gradient descent
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=105002405223&partnerID=8YFLogxK
U2 - 10.1088/2631-8695/adc5de
DO - 10.1088/2631-8695/adc5de
M3 - Article
AN - SCOPUS:105002405223
SN - 2631-8695
VL - 7
JO - Engineering Research Express
JF - Engineering Research Express
IS - 2
M1 - 025203
ER -
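
Note: the sketch below is not the paper's code. It is a minimal Python illustration, appended after the record, of the two approaches contrasted in the abstract (exhaustive brute-force search over a parameter grid versus iterative gradient descent), using a hypothetical synthetic 1-D regression problem rather than the Iris or Diabetes datasets; the grid bounds, step size, learning rate, and iteration count are all assumptions made for this example.

# Illustrative sketch (not the paper's code): brute-force grid search vs.
# plain gradient descent on a tiny 1-D linear regression problem.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=100)
y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=100)   # true w = 2.0, b = 0.5


def mse(w, b):
    """Mean squared error of the linear model y_hat = w*x + b."""
    return float(np.mean((w * x + b - y) ** 2))


# Brute-force search: evaluate every (w, b) pair on a coarse grid
# and keep the configuration with the lowest loss.
grid = np.arange(-3.0, 3.0, 0.05)
best = min((mse(w, b), w, b) for w in grid for b in grid)
print(f"brute force : loss={best[0]:.4f}  w={best[1]:.2f}  b={best[2]:.2f}")

# Gradient descent: repeatedly step against the loss gradient.
w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    err = w * x + b - y
    w -= lr * float(np.mean(2 * err * x))   # dL/dw
    b -= lr * float(np.mean(2 * err))       # dL/db
print(f"grad descent: loss={mse(w, b):.4f}  w={w:.2f}  b={b:.2f}")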