Model-Based Reinforcement Learning for Eco-Driving Control of Electric Vehicles

Heeyun Lee1, Namwook Kim#, Suk Won Cha#

With the development of autonomous vehicles, research on energy-efficient eco-driving is becoming increasingly important. Determining the vehicle speed profile that minimizes energy consumption is a challenging optimal control problem that must account for various aspects, such as the vehicle's energy consumption, the slope of the road, and the driving environment, e.g., traffic and other vehicles on the road. In this study, a reinforcement learning approach was applied to the eco-driving problem for electric vehicles considering road slopes. A novel model-based reinforcement learning algorithm for eco-driving was developed, which separates the vehicle's energy consumption approximation model from the driving environment model. Thus, domain knowledge of the vehicle dynamics and powertrain system is utilized in the reinforcement learning process, while model-free characteristics are maintained by updating the approximation model using experience replay. The proposed algorithm was tested via vehicle simulation and compared with the global solution obtained using dynamic programming (DP) as well as with conventional cruise control driving at constant speed. The simulation results indicated that the speed profile optimized using model-based reinforcement learning behaved similarly to the global DP solution and consumed less energy than cruise control.
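To make the structure of the abstract concrete, the sketch below illustrates the separation the paper describes: a known driving-environment model (here, a road-grade lookup), a learned per-step energy approximation model updated from a replay buffer of observed transitions, and a planner that uses both to choose accelerations. All numerical values, the linear feature choice, the one-step greedy planner, and the quadratic speed-tracking term are illustrative assumptions, not the paper's actual models or parameters.

```python
import math
import random

random.seed(0)
MASS, G, DT = 1500.0, 9.81, 1.0        # illustrative mass (kg), gravity, step (s)
ACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]  # candidate accelerations (m/s^2)
V_REF, W_TRACK = 15.0, 2e4             # assumed target speed and tracking weight

def road_grade(x):
    """Driving-environment model: road grade angle (rad) at position x (m)."""
    return 0.02 * math.sin(x / 150.0)

def measured_energy(v, a, theta):
    """Stand-in 'plant': per-step energy (J) the agent samples but cannot inspect."""
    force = MASS * a + MASS * G * math.sin(theta) + 0.4 * v + 120.0
    return force * v * DT

def features(v, a, theta):
    # assumed linear basis for the energy approximation model
    return [v, v * a, v * math.sin(theta), v * v]

def fit(buffer):
    """Least-squares fit of the energy model from replayed experience."""
    X = [features(v, a, th) for v, a, th, _ in buffer]
    y = [e for _, _, _, e in buffer]
    n = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(n)] for i in range(n)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(n)]
    for i in range(n):  # Gaussian elimination with partial pivoting
        p = max(range(i, n), key=lambda r: abs(A[r][i]))
        A[i], A[p], b[i], b[p] = A[p], A[i], b[p], b[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            A[r] = [arc - f * aic for arc, aic in zip(A[r], A[i])]
            b[r] -= f * b[i]
    w = [0.0] * n
    for i in range(n - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, n))) / A[i][i]
    return w

def plan(w, v, x):
    """One-step greedy action using the learned energy model and known grade."""
    th = road_grade(x)
    def cost(a):
        e = sum(wi * fi for wi, fi in zip(w, features(v, a, th)))
        return e + W_TRACK * (v + a * DT - V_REF) ** 2
    return min(ACTIONS, key=cost)

# 1) Exploration: fill the replay buffer with random transitions.
buffer = []
for _ in range(200):
    v, a = random.uniform(0.0, 20.0), random.uniform(-1.0, 1.0)
    th = road_grade(random.uniform(0.0, 2000.0))
    buffer.append((v, a, th, measured_energy(v, a, th)))
w = fit(buffer)

# 2) Rollout: drive with the learned model, refitting it from replay periodically.
x = v = 0.0
for step in range(120):
    a = plan(w, v, x)
    th = road_grade(x)
    buffer.append((v, a, th, measured_energy(v, a, th)))
    x, v = x + v * DT, max(v + a * DT, 0.0)
    if step % 20 == 19:
        w = fit(buffer)  # experience replay: refit on all stored samples
```

Because the stand-in plant is linear in the chosen features, the fitted weight on the `v * a` feature recovers `MASS * DT`, and the vehicle settles just below the target speed; a real implementation would use a richer function approximator and a multi-step optimization over the speed profile.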