RL-Based Adaptive Task-Offloading in Mobile-Edge Computing for IoT Networks

  • Ziad Qais Al-Abbasi
  • Khaled M. Rabie
  • Xingwang Li
  • Wali Ullah Khan
  • Asma Abu Samah

Research output: Contribution to journal › Article › peer-review

Abstract

The Internet of Things (IoT) is increasingly used in daily life and in industry. However, due to limitations in computing and power capabilities, IoT devices need to send their tasks to cloud service stations, which are usually located far away. Transmitting data over long distances presents challenges for services that require low latency, such as industrial control in factories and AI-assisted autonomous driving. To address this issue, mobile edge computing (MEC) is deployed at the network’s edge to reduce transmission time. This study proposes a new offloading scheme for MEC-assisted ultra-dense cellular networks using reinforcement learning (RL) techniques. The RL algorithm learns from the historical data of the network and adapts the offloading decisions to optimize overall network performance. Non-orthogonal multiple access is also adopted to improve resource utilization among IoT devices. Simulation results demonstrate that the proposed scheme outperforms other state-of-the-art offloading algorithms in terms of energy efficiency, network throughput, and user satisfaction.
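The abstract does not specify the paper's exact RL formulation, but the core idea of learning offloading decisions from experience can be illustrated with a minimal tabular Q-learning sketch. Everything below is a hypothetical toy model, not the authors' algorithm: the state is a coarse channel-quality level, the two actions are "run locally" vs. "offload to the MEC server", and `toy_reward` is an invented reward that favors offloading only when the channel is good.

```python
import random


class OffloadingAgent:
    """Tabular Q-learning agent for binary task-offloading decisions.

    Actions: 0 = execute the task locally, 1 = offload to the MEC server.
    States: discretized channel-quality levels (hypothetical discretization).
    """

    def __init__(self, n_states=4, n_actions=2, alpha=0.1, gamma=0.9, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.gamma, self.eps = alpha, gamma, eps

    def act(self, state):
        # Epsilon-greedy exploration over offloading decisions.
        if random.random() < self.eps:
            return random.randrange(len(self.q[state]))
        row = self.q[state]
        return row.index(max(row))

    def update(self, state, action, reward, next_state):
        # Standard one-step Q-learning update.
        best_next = max(self.q[next_state])
        td_target = reward + self.gamma * best_next
        self.q[state][action] += self.alpha * (td_target - self.q[state][action])


def toy_reward(state, action):
    """Invented reward: offloading pays off on a good channel (high state
    index); local execution is the safe default on a poor channel."""
    return state - 1.5 if action == 1 else 0.0


random.seed(0)
agent = OffloadingAgent()
state = random.randrange(4)
for _ in range(5000):
    action = agent.act(state)
    reward = toy_reward(state, action)
    next_state = random.randrange(4)  # i.i.d. channel model, for the sketch only
    agent.update(state, action, reward, next_state)
    state = next_state

# The greedy policy per channel state: offload only when the channel is good.
policy = [row.index(max(row)) for row in agent.q]
print(policy)
```

Under this toy reward, the learned greedy policy keeps tasks local in the two worst channel states and offloads in the two best, which mirrors the adaptive behavior the abstract describes at a much-simplified scale.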

Original language: English
Journal: IEEE Internet of Things Magazine
DOIs
Publication status: Accepted/In press - 2025
Externally published: Yes

Keywords

  • Internet of Things (IoT)
  • mobile-edge computing (MEC)
  • reinforcement learning (RL)
  • task-offloading

ASJC Scopus subject areas

  • Software
  • Information Systems
  • Hardware and Architecture
  • Computer Science Applications
  • Computer Networks and Communications

