
Vol. 13, No. 2 (2023)

Latency-aware task offloading in UAV-assisted edge computing: Leveraging deep reinforcement learning


Author - Affiliation:
Nguyen Tri Hai - Seoul National University of Science and Technology
Corresponding author: Nguyen Tri Hai - haint93@seoultech.ac.kr
Submitted: 17-08-2023
Accepted: 07-09-2023
Published: 31-10-2023

Abstract
With the exponential expansion of Internet of Things (IoT) technology, the growing volume of data generated by numerous IoT devices has become a significant challenge for communication and computing networks. Mobile Edge Computing (MEC) offers a viable solution by bringing computing, storage, and networking capabilities close to users, enabling applications that demand substantial computation power and low latency to be hosted at the network's edge. In addition, their maneuverability, cost-effectiveness, and ease of deployment make Unmanned Aerial Vehicles (UAVs) highly versatile wireless platforms. Capitalizing on the benefits of MEC and UAVs, this paper proposes a UAV-assisted MEC system that offloads services to reduce the computation burden in IoT. The objective is to maximize the task completion rate by jointly optimizing the user transmit power, task offloading rate, and UAV trajectory. To tackle this optimization problem, the paper devises a method based on the Deep Deterministic Policy Gradient (DDPG), a deep reinforcement learning algorithm for continuous action spaces. Numerical simulations demonstrate the effectiveness of the proposed approach against two benchmarks, a deep Q-network (DQN) scheme and a full offloading scheme: the proposed DDPG method achieves a 100% task success rate, whereas the DQN-based method reaches a lower task success rate of 95.93% and the full offloading method only obtains 82.73%.
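DDPG is suited to this problem because the decision variables named in the abstract (transmit power, offloading rate, UAV trajectory) are continuous. The following sketch illustrates two standard DDPG ingredients in a toy setting; the dimensions, hyperparameters, and variable names are illustrative assumptions and do not reproduce the paper's networks, reward, or MEC environment.

```python
import numpy as np

# Two core DDPG mechanisms (Lillicrap et al., 2016), sketched with plain
# NumPy rather than the paper's actual neural networks:
#  1. Polyak (soft) target-network updates: theta' <- tau*theta + (1-tau)*theta'
#  2. Ornstein-Uhlenbeck noise for exploring continuous actions

def soft_update(target_params, online_params, tau=0.005):
    """Blend online weights into target weights to stabilize TD targets."""
    return tau * online_params + (1.0 - tau) * target_params

class OUNoise:
    """Temporally correlated exploration noise added to the deterministic
    policy output (e.g., a hypothetical offloading ratio in [0, 1])."""
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.2, seed=0):
        self.mu, self.theta, self.sigma = mu, theta, sigma
        self.state = np.full(dim, mu)
        self.rng = np.random.default_rng(seed)

    def sample(self):
        # Mean-reverting random walk: drift toward mu plus Gaussian kicks.
        dx = self.theta * (self.mu - self.state) \
             + self.sigma * self.rng.standard_normal(self.state.shape)
        self.state = self.state + dx
        return self.state

# Usage: perturb a deterministic action, then clip to the feasible range.
noise = OUNoise(dim=2)
action = np.clip(np.array([0.7, 0.3]) + noise.sample(), 0.0, 1.0)

# Soft update: with tau=0.01, all-zero targets move 1% toward all-one weights.
target = soft_update(np.zeros(4), np.ones(4), tau=0.01)  # -> array of 0.01
```

The small tau keeps the target networks slowly moving, which is what lets DDPG bootstrap stable value estimates over a continuous action space instead of the discrete argmax a DQN would require.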

Keywords
deep reinforcement learning; latency; optimization; task offloading; unmanned aerial vehicle


Cite this paper as:

Nguyen, H. T. (2023). Latency-aware task offloading in UAV-assisted edge computing: Leveraging deep reinforcement learning. Ho Chi Minh City Open University Journal of Science – Engineering and Technology, 13(2), 69-79. doi:10.46223/HCMCOUJS.tech.en.13.2.2910.2023


References

Cheng, N., Lyu, F., Quan, W., Zhou, C., He, H., Shi, W., & Shen, X. (2019). Space/aerial-assisted computing offloading for IoT applications: A learning-based approach. IEEE Journal on Selected Areas in Communications, 37(5), 1117-1129. doi:10.1109/JSAC.2019.2906789


Dao, N. N., Pham, V. Q., Ngo, T. H., Tran, T. T., Vo, B. N. Q., Lakew, D. S., & Cho, S. (2021). Survey on aerial radio access networks: Toward a comprehensive 6G access infrastructure. IEEE Communications Surveys & Tutorials, 23(2), 1193-1225. doi:10.1109/COMST.2021.3059644


Huda, S. A., & Moh, S. (2022). Survey on computation offloading in UAV-enabled mobile edge computing. Journal of Network and Computer Applications, 201, Article 103341. doi:10.1016/j.jnca.2022.103341


Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., ... Wierstra, D. (2016). Continuous control with deep reinforcement learning. Paper presented at the 4th International Conference on Learning Representations (ICLR 2016), San Juan, Puerto Rico.


Liu, Y., Qin, Z., Elkashlan, M., Ding, Z., Nallanathan, A., & Hanzo, L. (2017). Nonorthogonal multiple access for 5G and beyond. Proceedings of the IEEE, 105(12), 2347-2381. doi:10.1109/JPROC.2017.2768666


Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., ... Hassabis, D. (2015). Human-level control through deep reinforcement learning. Nature, 518(7540), 529-533. doi:10.1038/nature14236


Nguyen, H. T., & Park, L. (2022). A survey on deep reinforcement learning-driven task offloading in aerial access networks. Paper presented at the 13th International Conference on Information and Communication Technology Convergence (ICTC 2022), Jeju Island, Republic of Korea. doi:10.1109/ICTC55196.2022.9952687


Nguyen, H. T., & Park, L. (2023). HAP-assisted RSMA-enabled vehicular edge computing: A DRL-based optimization framework. Mathematics, 11(10), Article 2376. doi:10.3390/math11102376


Nguyen, H. T., Truong, P. T., Dao, N. N., Na, W., Park, H., & Park, L. (2022). Deep reinforcement learning-based partial task offloading in high altitude platform-aided vehicular networks. Paper presented at the 13th International Conference on Information and Communication Technology Convergence (ICTC 2022), Jeju Island, Republic of Korea. doi:10.1109/ICTC55196.2022.9952890


Sun, C., Ni, W., & Wang, X. (2021). Joint computation offloading and trajectory planning for UAV-assisted edge computing. IEEE Transactions on Wireless Communications, 20(8), 5343-5358. doi:10.1109/TWC.2021.3067163


Truong, P. T., Dao, N. N., & Cho, S. (2022). HAMEC-RSMA: Enhanced aerial computing systems with rate splitting multiple access. IEEE Access, 10, 52398-52409. doi:10.1109/ACCESS.2022.3173125


Wang, H., Zhang, H., Liu, X., Long, K., & Nallanathan, A. (2022). Joint UAV placement optimization, resource allocation, and computation offloading for THz band: A DRL approach. IEEE Transactions on Wireless Communications, 22(7), 4890-4900. doi:10.1109/TWC.2022.3230407


Wang, Y., Fang, W., Ding, Y., & Xiong, N. (2021). Computation offloading optimization for UAV-assisted mobile edge computing: A deep deterministic policy gradient approach. Wireless Networks, 27(4), 2991-3006. doi:10.1007/s11276-021-02632-z


Yang, Z., Bi, S., & Zhang, Y. J. A. (2021). Dynamic trajectory and offloading control of UAV-enabled MEC under user mobility. Paper presented at the 2021 IEEE International Conference on Communications Workshops (ICC Workshops 2021), Montreal, QC, Canada. doi:10.1109/ICCWorkshops50388.2021.9473504


Zhu, S., Gui, L., Zhao, D., Cheng, N., Zhang, Q., & Lang, X. (2021). Learning-based computation offloading approaches in UAVs-assisted edge computing. IEEE Transactions on Vehicular Technology, 70(1), 928-944. doi:10.1109/TVT.2020.3048938



© The Author(s) 2023. This is an open access publication under the CC BY-NC license.