Policy Optimization as Online Learning with Mediator Feedback

Authors

  • Alberto Maria Metelli, Politecnico di Milano
  • Matteo Papini, Politecnico di Milano
  • Pierluca D'Oro, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI:

https://doi.org/10.1609/aaai.v35i10.17083

Keywords:

Reinforcement Learning

Abstract

Policy Optimization (PO) is a widely used approach to address continuous control tasks. In this paper, we introduce the notion of mediator feedback, which frames PO as an online learning problem over the policy space. The additional information available, compared to standard bandit feedback, allows reusing the samples generated by one policy to estimate the performance of other policies. Based on this observation, we propose RANDomized-exploration policy Optimization via Multiple Importance Sampling with Truncation (RANDOMIST), an algorithm for regret minimization in PO that employs a randomized exploration strategy, in contrast to existing optimistic approaches. When the policy space is finite, we show that RANDOMIST always enjoys logarithmic regret and, under certain circumstances, achieves constant regret. We also derive problem-dependent regret lower bounds. We then extend RANDOMIST to compact policy spaces. Finally, we provide numerical simulations on finite and compact policy spaces, comparing RANDOMIST with PO and bandit baselines.
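
To make the sample-reuse idea concrete, below is a minimal Python sketch of a truncated, balance-heuristic multiple importance sampling estimator, the kind of estimator RANDOMIST builds on. The function names, the Gaussian policies, and the fixed truncation level are illustrative assumptions (the paper chooses the truncation threshold adaptively), not the paper's code.

```python
import numpy as np
from scipy.stats import norm

def truncated_mis_estimate(target_logpdf, behavior_logpdfs, samples, rewards, truncation):
    """Estimate a target policy's expected reward from reused samples.

    Balance-heuristic multiple importance sampling with truncated weights:
    samples drawn under several behavior policies are reweighted toward the
    target policy, and the weights are clipped at `truncation`, trading a
    little bias for bounded variance. Assumes an equal number of samples
    was drawn from each behavior policy.
    """
    samples = np.asarray(samples, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    # Balance heuristic: the effective behavior is the uniform mixture
    # of all the policies that generated samples.
    mixture_pdf = np.mean([np.exp(lp(samples)) for lp in behavior_logpdfs], axis=0)
    weights = np.exp(target_logpdf(samples)) / mixture_pdf
    # Truncate the importance weights to control their heavy tails.
    weights = np.minimum(weights, truncation)
    return float(np.mean(weights * rewards))

# Toy usage: two Gaussian behavior policies, one Gaussian target.
rng = np.random.default_rng(0)
behaviors = [norm(loc=0.0, scale=1.0), norm(loc=0.5, scale=1.0)]
samples = np.concatenate([b.rvs(size=500, random_state=rng) for b in behaviors])
rewards = -samples**2  # hypothetical reward signal
target = norm(loc=0.25, scale=1.0)
value = truncated_mis_estimate(
    target.logpdf, [b.logpdf for b in behaviors], samples, rewards, truncation=10.0
)
```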

Published

2021-05-18

How to Cite

Metelli, A. M., Papini, M., D'Oro, P., & Restelli, M. (2021). Policy Optimization as Online Learning with Mediator Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 8958-8966. https://doi.org/10.1609/aaai.v35i10.17083

Issue

Vol. 35 No. 10 (2021)

Section

AAAI Technical Track on Machine Learning III