TY - GEN
T1 - WPT-enabled Multi-UAV Path Planning for Disaster Management using Deep Q-Network
AU - Merabet, Adel
AU - Lakas, Abderrahmane
AU - Belkacem, Abdelkader Nasreddine
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Unmanned aerial vehicles (UAVs) have become more prevalent over the past several years, with the intent to be widely deployed in many industries, including agriculture, cinematography, healthcare, delivery, and disaster management missions, due to their ability to provide real-time situational awareness. However, various limitations such as the battery capacity, the charging method, and the flying range make it difficult for most applications to carry out routine tasks in vast areas. In this paper, we propose a deep reinforcement learning (DRL) method for multi-UAV path planning that considers cooperative action amongst UAVs, in which they share their next destination to avoid visiting the same location at the same time. The Deep Q-Network (DQN) algorithm enables UAVs to autonomously plan their fastest path and ensure the continuity of the mission by deciding when to schedule a visit to a charging station or a data collection point. An objective function with a tailored reward is designed to maintain the stability of the model and ensure its quick convergence. Lastly, the proposed strategy has been demonstrated by experiments on different scenarios and has shown its effectiveness in ensuring the continuity of the mission with the fastest path possible.
AB - Unmanned aerial vehicles (UAVs) have become more prevalent over the past several years, with the intent to be widely deployed in many industries, including agriculture, cinematography, healthcare, delivery, and disaster management missions, due to their ability to provide real-time situational awareness. However, various limitations such as the battery capacity, the charging method, and the flying range make it difficult for most applications to carry out routine tasks in vast areas. In this paper, we propose a deep reinforcement learning (DRL) method for multi-UAV path planning that considers cooperative action amongst UAVs, in which they share their next destination to avoid visiting the same location at the same time. The Deep Q-Network (DQN) algorithm enables UAVs to autonomously plan their fastest path and ensure the continuity of the mission by deciding when to schedule a visit to a charging station or a data collection point. An objective function with a tailored reward is designed to maintain the stability of the model and ensure its quick convergence. Lastly, the proposed strategy has been demonstrated by experiments on different scenarios and has shown its effectiveness in ensuring the continuity of the mission with the fastest path possible.
KW - aerial data collection
KW - deep reinforcement learning
KW - smart health
KW - unmanned aerial vehicles
KW - wireless power transfer
KW - wireless sensor networks
UR - http://www.scopus.com/inward/record.url?scp=85167733531&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85167733531&partnerID=8YFLogxK
U2 - 10.1109/IWCMC58020.2023.10183018
DO - 10.1109/IWCMC58020.2023.10183018
M3 - Conference contribution
AN - SCOPUS:85167733531
T3 - 2023 International Wireless Communications and Mobile Computing, IWCMC 2023
SP - 1672
EP - 1678
BT - 2023 International Wireless Communications and Mobile Computing, IWCMC 2023
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 19th IEEE International Wireless Communications and Mobile Computing Conference, IWCMC 2023
Y2 - 19 June 2023 through 23 June 2023
ER -