Conventional traffic simulators usually employ a calibrated physical car-following model to describe vehicle behaviour. A fixed physical model tends to be less effective in a complicated environment, given the non-stationary nature of traffic dynamics. In this paper, we formulate traffic simulation as an inverse reinforcement learning problem and propose a parameter-sharing adversarial inverse reinforcement learning model for dynamics-robust simulation learning. Our proposed model is able to imitate a vehicle's trajectories in the real world while recovering a reward function that reveals the vehicle's true objective and is invariant to different dynamics. Extensive experiments on synthetic and real-world datasets show the superior performance of our approach compared to state-of-the-art methods.
Author(s) : Guanjie Zheng, Hanyang Liu, Kai Xu, Zhenhui Li
Links : PDF - Abstract
Code :
Keywords : model - traffic - learning - simulation - dynamics
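The abstract refers to adversarial inverse reinforcement learning (AIRL). As context only, a minimal sketch of the standard AIRL formulation that such a model typically builds on is given below; the symbols g_theta (reward approximator), h_phi (shaping term), and pi (the learned policy) follow the usual AIRL convention and are not taken from this paper, and the parameter-sharing variant across vehicles is not shown.

f_{\theta,\phi}(s, a, s') = g_\theta(s, a) + \gamma\, h_\phi(s') - h_\phi(s)

D_{\theta,\phi}(s, a, s') = \frac{\exp f_{\theta,\phi}(s, a, s')}{\exp f_{\theta,\phi}(s, a, s') + \pi(a \mid s)}

\hat{r}(s, a, s') = \log D_{\theta,\phi}(s, a, s') - \log\big(1 - D_{\theta,\phi}(s, a, s')\big)

In this style of objective, the discriminator D is trained to distinguish expert transitions from policy transitions while the policy is rewarded by \hat{r}; the g_\theta component can then recover a reward tied to the agent's objective rather than to a particular transition model, which is the property the abstract's claim of dynamics-invariance relies on.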