Practical trade-offs in neural network optimization: brute force search and gradient descent

Tobiloba Emmanuel Somefun, Timilehin Owolabi, Omowunmi Mary Longe

Research output: Contribution to journal › Article › peer-review

Abstract

Choosing the right optimization method is vital in the dynamic environment of neural networks. This work closely examines two approaches: gradient descent optimization and brute force search. Whereas gradient descent uses gradients to advance iteratively toward optimal solutions, brute force exhaustively checks every conceivable parameter setup. Training on a classification task (Iris dataset) and a regression task (Diabetes dataset) yielded some interesting findings. Brute force optimization demonstrated its thoroughness by achieving lower loss and higher precision. Although it took longer to train, it also, surprisingly, used less memory, which makes it appealing for environments with restricted resources. Conversely, gradient descent converged faster over the same period, but its final results suffered: this speed consumed more memory and the method occasionally settled into local minima. Notwithstanding these trade-offs, its balance of speed and scalability makes it a strong candidate for time-sensitive applications or large-scale neural networks. The results imply that no one-size-fits-all fix exists. The thoroughness of brute force and the efficiency of gradient descent satisfy different needs, which suggests hybrid approaches combining their advantages. Investigating these options will be essential to unlock new degrees of optimization as neural networks develop.
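The contrast the abstract draws can be illustrated with a minimal sketch (not the authors' code, and not the Iris/Diabetes experiments): a one-parameter least-squares toy problem where brute force evaluates every candidate weight on a grid, while gradient descent iteratively follows the negative gradient of the loss. The grid range, learning rate, and iteration count below are illustrative assumptions.

```python
# Illustrative sketch only: brute force search vs. gradient descent on a
# one-parameter least-squares problem (assumed toy setup, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 3.0 * x + rng.normal(scale=0.1, size=50)  # true weight is about 3.0


def loss(w):
    # Mean squared error of the linear model w * x against targets y.
    return np.mean((w * x - y) ** 2)


# Brute force: exhaustively evaluate every candidate weight on a fixed grid.
grid = np.linspace(-10.0, 10.0, 2001)
w_bf = grid[np.argmin([loss(w) for w in grid])]

# Gradient descent: start from 0 and repeatedly step against the gradient.
w_gd, lr = 0.0, 0.05
for _ in range(200):
    grad = 2.0 * np.mean((w_gd * x - y) * x)
    w_gd -= lr * grad

print(f"brute force:      w = {w_bf:.3f}, loss = {loss(w_bf):.4f}")
print(f"gradient descent: w = {w_gd:.3f}, loss = {loss(w_gd):.4f}")
```

The sketch mirrors the trade-off described above: the grid search touches every candidate and so scales poorly with parameter count, while gradient descent reaches a comparable solution in far fewer loss evaluations.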

Original language: English
Article number: 025203
Journal: Engineering Research Express
Volume: 7
Issue number: 2
DOIs
Publication status: Published - 30 Jun 2025

Keywords

  • brute force optimization
  • comparison
  • gradient descent
  • neural networks

ASJC Scopus subject areas

  • General Engineering
