TY - JOUR
T1 - A novel Q-learning-based routing scheme using an intelligent filtering algorithm for flying ad hoc networks (FANETs)
AU - Hosseinzadeh, Mehdi
AU - Ali, Saqib
AU - Ionescu-Feleaga, Liliana
AU - Ionescu, Bogdan Stefan
AU - Yousefpoor, Mohammad Sadegh
AU - Yousefpoor, Efat
AU - Ahmed, Omed Hassan
AU - Rahmani, Amir Masoud
AU - Mehmood, Asif
N1 - Publisher Copyright:
© 2023 The Author(s)
PY - 2023/12/1
Y1 - 2023/12/1
N2 - The flying ad hoc network (FANET) is an emerging network of unmanned aerial vehicles (UAVs) that has attracted the attention of researchers around the world. Because UAVs in this network must cooperate, data transfer between them is essential. Routing protocols must determine how to establish routing paths between UAVs in a wireless ad hoc network to facilitate this data transmission. Reinforcement learning (RL), especially Q-learning, is an effective approach for addressing existing challenges in routing and for adding properties such as autonomy, self-adaptation, and self-learning to routing schemes. In this paper, Q-learning is used to improve network performance, and a Q-learning-based routing method using an intelligent filtering algorithm, called QRF, is presented for FANETs. The main innovation of this paper is that QRF manages the size of the state space using the proposed filtering algorithm, which increases the convergence rate of the Q-learning-based routing algorithm. In addition, QRF tunes the Q-learning parameters so that the scheme better adapts to the FANET environment. Finally, network simulator version 2 (NS2) is employed to simulate QRF. Five evaluation criteria, namely energy consumption, packet delivery rate, overhead, end-to-end delay, and network longevity, are evaluated, and the results obtained from QRF are compared with those of QFAN, QTAR, and QGeo. The simulation results show that QRF distributes energy consumption evenly among UAVs and thus extends network longevity. Moreover, the intelligent filtering algorithm designed in QRF reduces routing delay, although it incurs additional communication overhead.
AB - The flying ad hoc network (FANET) is an emerging network of unmanned aerial vehicles (UAVs) that has attracted the attention of researchers around the world. Because UAVs in this network must cooperate, data transfer between them is essential. Routing protocols must determine how to establish routing paths between UAVs in a wireless ad hoc network to facilitate this data transmission. Reinforcement learning (RL), especially Q-learning, is an effective approach for addressing existing challenges in routing and for adding properties such as autonomy, self-adaptation, and self-learning to routing schemes. In this paper, Q-learning is used to improve network performance, and a Q-learning-based routing method using an intelligent filtering algorithm, called QRF, is presented for FANETs. The main innovation of this paper is that QRF manages the size of the state space using the proposed filtering algorithm, which increases the convergence rate of the Q-learning-based routing algorithm. In addition, QRF tunes the Q-learning parameters so that the scheme better adapts to the FANET environment. Finally, network simulator version 2 (NS2) is employed to simulate QRF. Five evaluation criteria, namely energy consumption, packet delivery rate, overhead, end-to-end delay, and network longevity, are evaluated, and the results obtained from QRF are compared with those of QFAN, QTAR, and QGeo. The simulation results show that QRF distributes energy consumption evenly among UAVs and thus extends network longevity. Moreover, the intelligent filtering algorithm designed in QRF reduces routing delay, although it incurs additional communication overhead.
KW - Flying ad hoc networks (FANETs)
KW - Q-learning
KW - Reinforcement learning (RL)
KW - Routing
KW - Unmanned aerial vehicles (UAVs)
UR - http://www.scopus.com/inward/record.url?scp=85178943385&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85178943385&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/14bbd3d4-0887-3759-a06a-2b4a33f123e0/
U2 - 10.1016/j.jksuci.2023.101817
DO - 10.1016/j.jksuci.2023.101817
M3 - Article
AN - SCOPUS:85178943385
SN - 1319-1578
VL - 35
JO - Journal of King Saud University - Computer and Information Sciences
JF - Journal of King Saud University - Computer and Information Sciences
IS - 10
M1 - 101817
ER -