TR2024-178
A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations
- "A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations", IEEE Conference on Decision and Control (CDC), December 2024.
- @inproceedings{Ozcan2024dec,
- author = {Ozcan, Erhan Can and Giammarino, Vittorio and Queeney, James and Paschalidis, Ioannis Ch.},
- title = {A Model-Based Approach for Improving Reinforcement Learning Efficiency Leveraging Expert Observations},
- booktitle = {IEEE Conference on Decision and Control (CDC)},
- year = 2024,
- month = dec,
- url = {https://www.merl.com/publications/TR2024-178}
- }
Abstract:
This paper investigates how to incorporate expert observations (without explicit information on expert actions) into a deep reinforcement learning setting to improve sample efficiency. First, we formulate an augmented policy loss combining a maximum entropy reinforcement learning objective with a behavioral cloning loss that leverages a forward dynamics model. Then, we propose an algorithm that automatically adjusts the weights of each component in the augmented loss function. Experiments on a variety of continuous control tasks demonstrate that the proposed algorithm outperforms various benchmarks by effectively utilizing available expert observations.
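The abstract describes an augmented policy loss that combines a maximum entropy RL objective with a behavioral cloning term computed through a learned forward dynamics model, so that expert actions are never needed. The following is a minimal illustrative sketch of that idea, not the paper's implementation: the linear policy and dynamics model, the dimensions, and the fixed weights are all hypothetical stand-ins (the paper adjusts the weights automatically).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
obs_dim, act_dim = 4, 2

# Linear stand-ins for the learned components:
#   policy:           a  = pi(s)
#   forward dynamics: s' = f(s, a)
W_pi = rng.normal(size=(act_dim, obs_dim))
A = rng.normal(size=(obs_dim, obs_dim))
B = rng.normal(size=(obs_dim, act_dim))

def policy(s):
    """Deterministic policy stand-in: maps a state to an action."""
    return W_pi @ s

def forward_model(s, a):
    """Forward dynamics stand-in: predicts the next state from (s, a)."""
    return A @ s + B @ a

def bc_loss_from_observations(s, s_next):
    """Behavioral-cloning term using only expert observations:
    penalize the gap between the next expert state and the state the
    dynamics model predicts under the policy's own action."""
    pred_next = forward_model(s, policy(s))
    return float(np.sum((pred_next - s_next) ** 2))

def augmented_loss(rl_loss, s, s_next, w_rl, w_bc):
    """Augmented policy loss: weighted sum of the RL objective and the
    observation-only BC term. Weights are fixed here; the paper's
    algorithm adapts them automatically during training."""
    return w_rl * rl_loss + w_bc * bc_loss_from_observations(s, s_next)
```

With `w_bc = 0` the loss reduces to the plain RL objective, and with `w_rl = 0` it reduces to pure imitation of the expert state transitions, which is the trade-off the automatic weighting navigates.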
Related News & Events
-
NEWS: MERL researchers present 7 papers at CDC 2024
Date: December 16, 2024 - December 19, 2024
Where: Milan, Italy
MERL Contacts: Ankush Chakrabarty; Vedang M. Deshpande; Stefano Di Cairano; James Queeney; Abraham P. Vinod; Avishai Weiss; Gordon Wichern
Research Areas: Artificial Intelligence, Control, Dynamical Systems, Machine Learning, Multi-Physical Modeling, Optimization, Robotics
Brief: MERL researchers presented 7 papers at the recently concluded Conference on Decision and Control (CDC) 2024 in Milan, Italy. The papers covered a wide range of topics including safety shielding for stochastic model predictive control, reinforcement learning using expert observations, physics-constrained meta learning for positioning, variational-Bayes Kalman filtering, Bayesian measurement masks for GNSS positioning, divert-feasible lunar landing, and centering and stochastic control using constrained zonotopes.
As a sponsor of the conference, MERL maintained a booth for open discussions with researchers and students, and hosted a special session to discuss highlights of MERL research and work philosophy.
In addition, Ankush Chakrabarty (Principal Research Scientist, Multiphysical Systems Team) was an invited speaker in the pre-conference Workshop on "Learning Dynamics From Data" where he gave a talk on few-shot meta-learning for black-box identification using data from similar systems.