Uehara, Masatoshi, Kiyohara, Haruka, Bennett, Andrew, Chernozhukov, Victor, Jiang, Nan, Kallus, Nathan, Shi, Chengchun ORCID: 0000-0001-7773-2099 and Sun, Wenguang (2023) Future-dependent value-based off-policy evaluation in POMDPs. In: Oh, A., Naumann, T., Globerson, A., Saenko, K., Hardt, M. and Levine, S., (eds.) Advances in Neural Information Processing Systems 36 (NeurIPS 2023). Neural Information Processing Systems Foundation.
Text (Future-Dependent Value-Based Off-Policy Evaluation in POMDPs) - Accepted Version
Abstract
We study off-policy evaluation (OPE) for partially observable MDPs (POMDPs) with general function approximation. Existing methods such as sequential importance sampling estimators suffer from the curse of horizon in POMDPs. To circumvent this problem, we develop a novel model-free OPE method by introducing future-dependent value functions, which take future proxies as inputs and play a role similar to that of classical value functions in fully observable MDPs. We derive a new off-policy Bellman equation for future-dependent value functions as conditional moment equations that use history proxies as instrumental variables. We further propose a minimax learning method to learn future-dependent value functions using the new Bellman equation. We establish a PAC result, which implies that our OPE estimator is close to the true policy value under Bellman completeness, as long as futures and histories contain sufficient information about the latent states. Our code is available at https://github.com/aiueola/neurips2023-future-dependent-ope.
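As a rough illustration of the estimation step described in the abstract, the sketch below sets up a conditional-moment (instrumental-variable style) Bellman restriction with linear function classes, where the minimax problem admits a closed-form, two-stage-least-squares-style solution. This is a minimal sketch under assumed linear features and synthetic data; the names (`psi_H`, `phi_F`, `rho`, the ridge term) are illustrative and are not taken from the authors' implementation, which is available at the repository linked above.

```python
# Minimal sketch: IV/minimax-style estimation of a future-dependent value
# function with linear function classes. Illustrative only -- feature maps,
# synthetic data, and hyperparameters are assumptions, not the paper's code.
import numpy as np

rng = np.random.default_rng(0)
n, d_f, d_h, gamma = 5000, 8, 8, 0.95

# Synthetic logged tuples: history proxy H, future proxy F, reward R,
# next future proxy F', and per-step importance ratio rho = pi_e / pi_b.
psi_H  = rng.normal(size=(n, d_h))               # features of history proxies (instruments)
phi_F  = rng.normal(size=(n, d_f))               # features of current future proxies
phi_Fn = rng.normal(size=(n, d_f))               # features of next-step future proxies
R      = rng.normal(size=n)                      # rewards
rho    = np.exp(rng.normal(scale=0.1, size=n))   # importance ratios

# With g(F) = theta^T phi(F), the Bellman residual is linear in theta:
#   delta(theta) = rho * R + gamma * rho * g(F') - g(F) = b + A @ theta.
A = gamma * rho[:, None] * phi_Fn - phi_F
b = rho * R

# Impose E[psi(H) * delta(theta)] = 0 by projecting the residual onto the
# instrument features and solving the resulting normal equations.
M = psi_H.T @ A / n                              # instrument-residual slope
v = psi_H.T @ b / n                              # instrument-residual offset
reg = 1e-3 * np.eye(d_f)                         # small ridge term for stability
theta = np.linalg.solve(M.T @ M + reg, -M.T @ v)

def g_value(phi):
    """Estimated future-dependent value function g(F) = theta^T phi(F)."""
    return phi @ theta

# Stand-in for averaging g over initial-time future proxies (synthetic here).
print("policy value estimate:", g_value(phi_F[:100]).mean())
```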
| Item Type: | Book Section |
| --- | --- |
| Official URL: | https://proceedings.neurips.cc/paper_files/paper/2... |
| Additional Information: | © 2023 The Author |
| Divisions: | Statistics |
| Subjects: | H Social Sciences > HA Statistics |
| Date Deposited: | 23 Apr 2024 10:54 |
| Last Modified: | 20 Dec 2024 00:20 |
| URI: | http://eprints.lse.ac.uk/id/eprint/122752 |