Constrained Multi-Agent Traffic Control

Deep RL | Fall 2021 | Prof. Rose Yu

Population growth and the densification of urban neighbourhoods are making traffic congestion an increasingly pressing real-world problem. Adaptive traffic signal control (ATSC) solutions attempt to tackle road congestion by modeling stoplight behavior policies based on real-time traffic data. Early work relied on heuristics and domain-specific information to craft stoplight policies; however, collecting such information for larger road networks is time-consuming, expensive, and often infeasible.

Recently, reinforcement learning (RL) has been applied to the task. RL algorithms do not rely on heuristics and can learn effective control policies by directly fitting a parametric model on the input space, making them well suited to the ATSC domain. However, by default these policies do not account for the real-world constraints needed to ensure tractability and safety.
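
As a rough illustration of what "fitting a parametric model on the input space" can look like for a single intersection, the sketch below uses a linear Q-function over hypothetical queue-length features, updated with a standard temporal-difference (Q-learning) step. The state dimension, number of phases, and hyperparameters are assumptions for illustration, not the project's actual setup.

```python
# Illustrative sketch only: a linear Q-function over traffic observations,
# trained with tabular-style Q-learning updates on its parameters.
import numpy as np

N_PHASES = 4                 # discrete actions: candidate signal phases (assumed)
STATE_DIM = 8                # e.g., queue lengths on incoming lanes (hypothetical)
ALPHA, GAMMA = 0.01, 0.95    # learning rate and discount factor

W = np.zeros((STATE_DIM, N_PHASES))  # parameters of the Q-function


def q_values(state):
    """Q(s, .) for all phases under the linear parametrization."""
    return state @ W


def td_update(state, action, reward, next_state):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    global W
    target = reward + GAMMA * q_values(next_state).max()
    td_error = target - q_values(state)[action]
    W[:, action] += ALPHA * td_error * state
```

In practice, deep RL methods for ATSC replace the linear map with a neural network and the hand-picked features with richer observations, but the core idea of fitting parameters to observed traffic state is the same.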

Designing agents with safe policies would make them deployable for traffic signal control in the real world. Our work focuses on incorporating both hard and soft constraints into the RL agent's learning to ensure safety and allow configurability.
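
One common way such constraints can be layered onto a signal-control policy is sketched below; this is a minimal illustration under assumed details (a softmax policy over phases, a minimum-green-time hard constraint, and a queue-length soft constraint with a Lagrange multiplier), not the project's actual implementation.

```python
# Illustrative sketch: hard constraints via action masking, soft constraints
# via a Lagrangian penalty on the return. All thresholds are hypothetical.
import numpy as np

N_PHASES = 4          # candidate signal phases at one intersection
MIN_GREEN_STEPS = 3   # hard constraint: a phase must stay green at least this long
QUEUE_LIMIT = 20.0    # soft constraint: average queue length should stay below this

rng = np.random.default_rng(0)
theta = rng.normal(scale=0.1, size=(8, N_PHASES))  # linear policy weights (8-dim state)
lam = 0.0                                          # Lagrange multiplier for the soft constraint


def feasible_mask(current_phase, steps_in_phase):
    """Hard constraint: forbid switching away before MIN_GREEN_STEPS elapse."""
    mask = np.ones(N_PHASES, dtype=bool)
    if steps_in_phase < MIN_GREEN_STEPS:
        mask[:] = False
        mask[current_phase] = True  # only 'keep current phase' is allowed
    return mask


def select_action(state, current_phase, steps_in_phase):
    """Softmax policy over signal phases, restricted to feasible ones."""
    logits = state @ theta
    mask = feasible_mask(current_phase, steps_in_phase)
    logits = np.where(mask, logits, -np.inf)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return rng.choice(N_PHASES, p=probs)


def lagrangian_return(rewards, queue_lengths, lam):
    """Soft constraint: penalize expected queue overflow via a Lagrangian term."""
    overflow = np.maximum(np.mean(queue_lengths) - QUEUE_LIMIT, 0.0)
    return np.sum(rewards) - lam * overflow
```

In a full constrained-RL training loop, the multiplier lam would itself be updated by ascent on the measured constraint violation while the policy parameters are updated to maximize the Lagrangian return, whereas the action mask guarantees the hard constraint holds at every step regardless of how training proceeds.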

Collaborators

Ulyana Tkachenko, Rohin Garg