Conventional traffic simulators usually employ a calibrated physical car-following model to describe vehicles' behaviour. A fixed physical model tends to be less effective in a complicated environment given the non-stationary nature of traffic dynamics. In this paper, we formulate traffic simulation as an inverse reinforcement learning problem, and propose a parameter-sharing adversarial inverse reinforcement learning model for dynamics-robust simulation learning. Our proposed model is able to imitate a vehicle's trajectories in the real world while recovering the reward function that reveals the vehicle's true objective, which is invariant to different dynamics. Extensive experiments on synthetic and real-world datasets show the superior performance of our approach compared to state-of-the-art methods.
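The abstract builds on the standard adversarial inverse reinforcement learning (AIRL) setup, in which a discriminator with a disentangled reward term f(s, a, s') = g(s, a) + γ·h(s') − h(s) is trained against the imitating policy. The sketch below is a minimal, assumed illustration of that discriminator form; the function names and the log-space computation are ours, not the paper's code.

```python
import numpy as np

def shaped_reward(g_sa, h_s, h_s_next, gamma=0.99):
    """Disentangled AIRL reward term: f(s, a, s') = g(s, a) + gamma*h(s') - h(s).

    g_sa approximates the dynamics-invariant reward the paper aims to recover;
    h is a state-dependent shaping potential.
    """
    return g_sa + gamma * h_s_next - h_s

def airl_discriminator(f_value, log_pi):
    """AIRL discriminator D = exp(f) / (exp(f) + pi(a|s)).

    Computed stably in log space as sigmoid(f - log pi(a|s)); D > 0.5 means the
    discriminator judges the transition more likely expert than policy-generated.
    """
    return 1.0 / (1.0 + np.exp(-(f_value - log_pi)))
```

In a parameter-sharing variant, as the abstract's model name suggests, the same g, h, and policy networks would be shared across all simulated vehicles, so the recovered reward does not depend on any single vehicle's identity.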

Author(s) : Guanjie Zheng, Hanyang Liu, Kai Xu, Zhenhui Li

Links : PDF - Abstract

Code :

Keywords : model - traffic - learning - simulation - dynamics
