Zhou, Hongyi, Hanna, Josiah P., Zhu, Jin (ORCID: 0000-0001-8550-5822), Yang, Ying and Shi, Chengchun (ORCID: 0000-0001-7773-2099) (2025) Demystifying the paradox of importance sampling with an estimated history-dependent behavior policy in off-policy evaluation. In: Proceedings of the 42nd International Conference on Machine Learning. ACM Press. (In Press)
Text (Demystifying_the_Paradox_of_Importance_Sampling) - Accepted Version (2MB). Available under License Creative Commons Attribution.
Abstract
This paper studies off-policy evaluation (OPE) in reinforcement learning with a focus on behavior policy estimation for importance sampling. Prior work has shown empirically that estimating a history-dependent behavior policy can lead to lower mean squared error (MSE) even when the true behavior policy is Markovian. However, the question of why the use of history should lower MSE remains open. In this paper, we theoretically demystify this paradox by deriving a bias-variance decomposition of the MSE of ordinary importance sampling (IS) estimators, demonstrating that history-dependent behavior policy estimation decreases their asymptotic variances while increasing their finite-sample biases. Additionally, we show that the variance decreases consistently as the estimated behavior policy conditions on a longer history. We extend these findings to a range of other OPE estimators, including the sequential IS estimator, the doubly robust estimator and the marginalized IS estimator, with the behavior policy estimated either parametrically or nonparametrically.
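To make the setting concrete, the sketch below (not the authors' code) illustrates the ordinary IS estimator with a behavior policy estimated from the logged trajectories themselves, where the estimate can condition on the current state only or on the full history, mirroring the comparison the paper analyzes. All names (`estimate_behavior_policy`, `ordinary_is_estimate`) and the tabular maximum-likelihood fit are illustrative assumptions for finite state and action spaces.

```python
# Illustrative sketch: ordinary importance sampling (IS) for off-policy
# evaluation with an estimated behavior policy. The behavior policy estimate
# conditions either on the current state (Markov) or on the full history.
# Function and variable names are hypothetical.
import numpy as np

def estimate_behavior_policy(trajectories, n_actions, history_dependent=False):
    """Tabular maximum-likelihood estimate of the behavior policy.

    Keys are the current state, or the full history (s_0, a_0, ..., s_t)
    when `history_dependent` is True.
    """
    counts = {}
    for traj in trajectories:
        history = []
        for (s, a, r) in traj:
            key = (s,) if not history_dependent else tuple(history) + (s,)
            counts.setdefault(key, np.zeros(n_actions))[a] += 1
            history.extend([s, a])
    return {k: v / v.sum() for k, v in counts.items()}

def ordinary_is_estimate(trajectories, target_policy, behavior_hat,
                         history_dependent=False, gamma=1.0):
    """Ordinary IS: weight each trajectory's return by the product of
    target / estimated-behavior action probabilities along the trajectory."""
    values = []
    for traj in trajectories:
        weight, ret, history = 1.0, 0.0, []
        for t, (s, a, r) in enumerate(traj):
            key = (s,) if not history_dependent else tuple(history) + (s,)
            weight *= target_policy(s)[a] / behavior_hat[key][a]
            ret += gamma ** t * r
            history.extend([s, a])
        values.append(weight * ret)
    return float(np.mean(values))

# Usage on toy data: each trajectory is a list of (state, action, reward) tuples.
trajs = [[(0, 1, 1.0), (1, 0, 0.0)], [(0, 0, 0.0), (0, 1, 1.0)]]
pi_e = lambda s: np.array([0.2, 0.8])  # target policy, fixed for illustration
pb_hat = estimate_behavior_policy(trajs, n_actions=2, history_dependent=True)
print(ordinary_is_estimate(trajs, pi_e, pb_hat, history_dependent=True))
```

In the history-dependent variant only the denominator of each importance weight changes: the target policy in the numerator is still evaluated at the current state, matching the setting in the abstract where the true behavior policy is Markovian.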
| Item Type | Book Section |
|---|---|
| Additional Information | © 2025 The Author(s) |
| Divisions | Statistics |
| Date Deposited | 27 Jun 2025 14:21 |
| Last Modified | 27 Jun 2025 15:42 |
| URI | http://eprints.lse.ac.uk/id/eprint/128272 |