TY - GEN
T1 - Simulation and performance evaluation of computational mobile devices strategies for data transmission and local processing in IoT systems
AU - Egwuche, Ojonukpe Sylvester
AU - Greeff, Japie
AU - Heymann, Reolyn
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - As wireless and edge computing networks become integral to the advancement of intelligent systems and automation, there is a critical need to explore efficient resource management mechanisms that can sustain optimal system performance in dynamic, mobile, and heterogeneous network environments. Smart mobile devices are characterized by limited battery power, memory, and processing ability; however, they are potential computing devices that can be harnessed to process tasks that do not require high computing power, while tasks that require intensive processing are offloaded to remote server nodes. In this study, we designed and evaluated the performance of different strategies for managing local processing and transmission of tasks in edge computing and mobile ad hoc network scenarios: a reinforcement learning strategy (based on deep reinforcement learning), random action selection, transmission-only, and local-processing-only. Key performance metrics, including average rewards (throughput) and task drop rates, were tracked over a series of simulation frames. The results show that the reinforcement learning strategy yields more resource-efficient decisions on whether to process tasks locally or transmit them than the other strategies do, with minimal latency and task loss. Its efficiency is attributed to the optimization of the decision-making process in deep reinforcement learning (Deep Q-Network), which allows the system to learn over time and adjust its actions based on task demands and available resources. This approach can be used to optimize decisions on resource allocation, task offloading, and network management in edge computing and mobile ad hoc networks (MANETs).
AB - As wireless and edge computing networks become integral to the advancement of intelligent systems and automation, there is a critical need to explore efficient resource management mechanisms that can sustain optimal system performance in dynamic, mobile, and heterogeneous network environments. Smart mobile devices are characterized by limited battery power, memory, and processing ability; however, they are potential computing devices that can be harnessed to process tasks that do not require high computing power, while tasks that require intensive processing are offloaded to remote server nodes. In this study, we designed and evaluated the performance of different strategies for managing local processing and transmission of tasks in edge computing and mobile ad hoc network scenarios: a reinforcement learning strategy (based on deep reinforcement learning), random action selection, transmission-only, and local-processing-only. Key performance metrics, including average rewards (throughput) and task drop rates, were tracked over a series of simulation frames. The results show that the reinforcement learning strategy yields more resource-efficient decisions on whether to process tasks locally or transmit them than the other strategies do, with minimal latency and task loss. Its efficiency is attributed to the optimization of the decision-making process in deep reinforcement learning (Deep Q-Network), which allows the system to learn over time and adjust its actions based on task demands and available resources. This approach can be used to optimize decisions on resource allocation, task offloading, and network management in edge computing and mobile ad hoc networks (MANETs).
KW - Internet of Things
KW - IoT
KW - computational mobile devices
KW - data transmission
KW - task computation
UR - https://www.scopus.com/pages/publications/105026946006
U2 - 10.1109/WiSEE57913.2025.11229846
DO - 10.1109/WiSEE57913.2025.11229846
M3 - Conference contribution
AN - SCOPUS:105026946006
T3 - 2025 IEEE International Conference on Wireless for Space and Extreme Environments, WiSEE 2025
BT - 2025 IEEE International Conference on Wireless for Space and Extreme Environments, WiSEE 2025
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2025 IEEE International Conference on Wireless for Space and Extreme Environments, WiSEE 2025
Y2 - 14 October 2025 through 16 October 2025
ER -