TY - JOUR
T1 - Deep reinforcement learning for traffic signal control with consistent state and reward design approach
AU - Bouktif, Salah
AU - Cheniki, Abderraouf
AU - Ouni, Ali
AU - El-Sayed, Hesham
N1 - Funding Information:
This work was supported by the Emirates Center for Mobility Research (ECMR) of the United Arab Emirates University (grant number 31R225).
Publisher Copyright:
© 2023 The Author(s)
PY - 2023/5/12
Y1 - 2023/5/12
N2 - Intelligent Transportation Systems are essential nowadays due to the growing number of traffic congestion problems and challenges. Traffic Signal Control (TSC) plays a critical role in optimizing traffic flow and mitigating congestion within urban areas. Various research works have been conducted to enhance the behavior of TSCs at intersections and subsequently reduce traffic congestion. Researchers have recently leveraged Deep Learning (DL) and Reinforcement Learning (RL) techniques to optimize TSCs. In the RL framework, the agent interacts with the surrounding world through states, rewards, and actions. The formulation of these key elements is crucial as it impacts the way the RL agent behaves and optimizes its policy. However, most existing frameworks rely on hand-crafted state and reward designs, preventing the RL agent from acting optimally. In this paper, we propose a novel approach to better formulate state and reward definitions in order to boost the performance of the traffic signal controller agent. The intuitive idea is to define both state and reward in a consistent and straightforward manner. We advocate that such a design approach helps achieve training stability and hence provides rapid convergence toward the best policies. We consider the Double Deep Q-Network (DDQN) along with Prioritized Experience Replay (PER) for the agent architecture. To evaluate the performance of our approach, we conduct a series of simulations using the Simulation of Urban MObility (SUMO) environment. The statistical analysis of our results shows that our proposal outperforms state-of-the-art state and reward design approaches.
AB - Intelligent Transportation Systems are essential nowadays due to the growing number of traffic congestion problems and challenges. Traffic Signal Control (TSC) plays a critical role in optimizing traffic flow and mitigating congestion within urban areas. Various research works have been conducted to enhance the behavior of TSCs at intersections and subsequently reduce traffic congestion. Researchers have recently leveraged Deep Learning (DL) and Reinforcement Learning (RL) techniques to optimize TSCs. In the RL framework, the agent interacts with the surrounding world through states, rewards, and actions. The formulation of these key elements is crucial as it impacts the way the RL agent behaves and optimizes its policy. However, most existing frameworks rely on hand-crafted state and reward designs, preventing the RL agent from acting optimally. In this paper, we propose a novel approach to better formulate state and reward definitions in order to boost the performance of the traffic signal controller agent. The intuitive idea is to define both state and reward in a consistent and straightforward manner. We advocate that such a design approach helps achieve training stability and hence provides rapid convergence toward the best policies. We consider the Double Deep Q-Network (DDQN) along with Prioritized Experience Replay (PER) for the agent architecture. To evaluate the performance of our approach, we conduct a series of simulations using the Simulation of Urban MObility (SUMO) environment. The statistical analysis of our results shows that our proposal outperforms state-of-the-art state and reward design approaches.
KW - Double deep Q-Network
KW - Reinforcement learning
KW - Traffic optimization
KW - Traffic signal control
UR - http://www.scopus.com/inward/record.url?scp=85150069211&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85150069211&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2023.110440
DO - 10.1016/j.knosys.2023.110440
M3 - Article
AN - SCOPUS:85150069211
SN - 0950-7051
VL - 267
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 110440
ER -