Lipschitz Lifelong Reinforcement Learning

Authors

  • Erwan Lecarpentier, ISAE-SUPAERO, Université de Toulouse, and ONERA - The French Aerospace Lab
  • David Abel, Brown University
  • Kavosh Asadi, Brown University and Amazon Web Services
  • Yuu Jinnai, Brown University
  • Emmanuel Rachelson, ISAE-SUPAERO, Université de Toulouse
  • Michael L. Littman, Brown University

DOI:

https://doi.org/10.1609/aaai.v35i9.17006

Keywords:

Reinforcement Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

We consider the problem of knowledge transfer when an agent faces a series of Reinforcement Learning (RL) tasks. We introduce a novel metric between Markov Decision Processes (MDPs) and establish that close MDPs have close optimal value functions. Formally, the optimal value functions are Lipschitz continuous with respect to the task space. These theoretical results lead us to a value-transfer method for Lifelong RL, which we use to build a PAC-MDP algorithm with an improved convergence rate. Further, we show that, with high probability, the method experiences no negative transfer. We illustrate the benefits of the method in Lifelong RL experiments.
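
As a minimal illustrative sketch of the continuity claim (not the paper's exact statement; the task metric d and Lipschitz constant L below are placeholder notation), the result can be read as follows: for two MDPs M and M-bar over a common state space,

    \forall s \in \mathcal{S}, \quad \left| V^*_{M}(s) - V^*_{\bar{M}}(s) \right| \le L \, d(M, \bar{M})

In words, the optimal value function of a previously solved task bounds how far the optimal value function of a nearby task can deviate, which is the property a value-transfer method can exploit.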

Published

2021-05-18

How to Cite

Lecarpentier, E., Abel, D., Asadi, K., Jinnai, Y., Rachelson, E., & Littman, M. L. (2021). Lipschitz Lifelong Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8270-8278. https://doi.org/10.1609/aaai.v35i9.17006

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II