Off-policy evaluation (OPE) is important for closing the gap between offline training and evaluation of reinforcement learning (RL): it estimates the performance and/or rank of target (evaluation) policies using only a fixed set of offline trajectories [61].
This motivates us to study off-policy evaluation from logged human feedback. We formalize the problem as offline evaluation with ranked lists [13, 31, 18], and propose both model-based and model-free estimators.
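As one concrete illustration of the model-free side, the sketch below applies plain inverse-propensity scoring (IPS) to logged ranked lists: each logged list's feedback is reweighted by the ratio of the list's probability under the target policy to its probability under the behavior (logging) policy, which corrects for the fact that the data were generated by the behavior policy. The Plackett-Luce policy parameterization, the `ips_estimate` helper, and the toy reward signal are all assumptions made for this sketch; it is not the estimators proposed in the paper.

```python
import numpy as np

def plackett_luce_list_prob(scores, ranked_list):
    """Probability of producing `ranked_list` by drawing items without
    replacement in proportion to exp(scores) (Plackett-Luce model)."""
    logits = np.asarray(scores, dtype=float)
    remaining = list(range(len(logits)))
    prob = 1.0
    for item in ranked_list:
        weights = np.exp(logits[remaining])
        prob *= weights[remaining.index(item)] / weights.sum()
        remaining.remove(item)
    return prob

def ips_estimate(logged_data, target_scores, behavior_scores):
    """IPS estimate of the target policy's value from logged ranked lists.

    logged_data: iterable of (ranked_list, reward) pairs collected by the
        behavior policy, where reward stands in for aggregated human feedback.
    target_scores / behavior_scores: per-item scores defining Plackett-Luce
        policies (a hypothetical parameterization used only for this sketch).
    """
    weighted_rewards = []
    for ranked_list, reward in logged_data:
        p_target = plackett_luce_list_prob(target_scores, ranked_list)
        p_behavior = plackett_luce_list_prob(behavior_scores, ranked_list)
        weighted_rewards.append(reward * p_target / p_behavior)
    return float(np.mean(weighted_rewards))

# Toy usage: 3 items, logged lists of length 2, binary feedback.
rng = np.random.default_rng(0)
behavior_scores = np.array([0.0, 0.0, 0.0])   # uniform logging policy
target_scores = np.array([1.0, 0.0, -1.0])    # policy to be evaluated
logged = [(list(rng.permutation(3)[:2]), int(rng.random() < 0.5))
          for _ in range(1000)]
print(ips_estimate(logged, target_scores, behavior_scores))
```

The estimator is unbiased whenever the behavior policy assigns positive probability to every list the target policy can produce, which is the standard coverage condition for importance-sampling-based OPE.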
Due to the mismatch between the visitation distributions of the behavior and target policies, evaluation in the off-policy setting is entirely different from on-policy evaluation.
In this paper, we study the sample efficiency of OPE with human preference and establish a statistical guarantee for it.
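To make the flavor of such a guarantee concrete, the snippet below computes a textbook Hoeffding confidence interval around the empirical mean of importance-weighted rewards: if each weighted reward is assumed to lie in a bounded range, the interval holds with probability at least 1 - delta and its width shrinks at the familiar O(1/sqrt(n)) rate, which is one way of quantifying sample efficiency. The boundedness assumption and the choice of bound are illustrative only and are not the specific guarantee established in the paper.

```python
import numpy as np

def hoeffding_interval(weighted_rewards, value_range, delta=0.05):
    """Two-sided (1 - delta) confidence interval for the mean of i.i.d.
    samples lying in an interval of width `value_range` (Hoeffding bound).

    For an IPS estimate, `value_range` would be an assumed bound on the
    importance-weighted reward, e.g. max reward times max importance weight
    (a hypothetical quantity for this sketch)."""
    x = np.asarray(weighted_rewards, dtype=float)
    n = x.size
    mean = x.mean()
    half_width = value_range * np.sqrt(np.log(2.0 / delta) / (2.0 * n))
    return mean - half_width, mean + half_width

# Example: 1000 weighted rewards assumed to lie in [0, 4].
samples = np.clip(np.random.default_rng(1).normal(0.5, 0.2, size=1000), 0.0, 4.0)
print(hoeffding_interval(samples, value_range=4.0))
```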