LSE Research Online LSE Library Services

Deep jump learning for off-policy evaluation in continuous treatment settings

Cai, Hengrui, Shi, Chengchun ORCID: 0000-0001-7773-2099, Song, Rui and Lu, Wenbin (2021) Deep jump learning for off-policy evaluation in continuous treatment settings. In: Proceedings of the 35th Conference on Neural Information Processing Systems. UNSPECIFIED.

Text (DJL_arXiv) - Accepted Version
Download (1MB)

Abstract

We consider off-policy evaluation (OPE) in continuous treatment settings, such as personalized dose-finding. In OPE, one aims to estimate the mean outcome under a new treatment decision rule using historical data generated by a different decision rule. Most existing works on OPE focus on discrete treatment settings. To handle continuous treatments, we develop a novel estimation method for OPE using deep jump learning. The key ingredient of our method lies in adaptively discretizing the treatment space using deep discretization, by leveraging deep learning and multi-scale change point detection. This allows us to apply existing OPE methods for discrete treatments to handle continuous treatments. Our method is further justified by theoretical results, simulations, and a real application to warfarin dosing.
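To illustrate the idea described in the abstract — (1) adaptively partition the continuous treatment (dose) space into intervals on which the outcome surface is roughly constant, then (2) evaluate the target rule with a discrete-treatment estimator — here is a minimal, hypothetical sketch. The greedy bin-merging rule stands in for the paper's multi-scale change point detection, and a per-interval sample mean stands in for the deep-learning outcome model; all function names and the toy segmentation rule are illustrative assumptions, not the authors' implementation.

```python
import random

def segment_doses(data, n_grid=10, merge_tol=0.5):
    """Toy adaptive discretization: start from a fine grid over [0, 1]
    and merge adjacent bins whose mean outcomes differ by less than
    merge_tol (a crude stand-in for change point detection)."""
    edges = [i / n_grid for i in range(n_grid + 1)]

    def bin_mean(lo, hi):
        ys = [y for a, y in data if lo <= a < hi]
        return sum(ys) / len(ys) if ys else 0.0

    merged = [edges[0], edges[1]]
    for e in edges[2:]:
        if abs(bin_mean(merged[-2], merged[-1]) - bin_mean(merged[-1], e)) < merge_tol:
            merged[-1] = e      # outcomes look flat: extend current interval
        else:
            merged.append(e)    # jump detected: start a new interval
    return merged

def direct_method_ope(data, edges, policy, n_eval=200):
    """Evaluate a deterministic dose rule by averaging the fitted mean
    outcome of the interval each recommended dose falls into."""
    def interval_mean(a):
        for lo, hi in zip(edges, edges[1:]):
            if lo <= a < hi or (hi == edges[-1] and a == hi):
                ys = [y for d, y in data if lo <= d < hi]
                return sum(ys) / len(ys) if ys else 0.0
        return 0.0
    return sum(interval_mean(policy()) for _ in range(n_eval)) / n_eval

# Toy historical data: the true outcome jumps at dose 0.5
# (mean 1.0 below the jump, 3.0 above it).
random.seed(0)
data = [(a, 1.0 if a < 0.5 else 3.0)
        for a in (random.random() for _ in range(500))]

edges = segment_doses(data)           # recovers the jump at 0.5
value = direct_method_ope(data, edges, policy=lambda: 0.8)
```

On this toy example the discretization recovers the single change point at 0.5, and the direct-method estimate for the rule "always dose 0.8" is the mean outcome on the upper interval. The actual method replaces both stand-ins with deep neural networks and a principled multi-scale change point criterion.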

Item Type: Book Section
Official URL: https://proceedings.neurips.cc/
Additional Information: © 2021 The Authors
Divisions: Statistics
Subjects: H Social Sciences > HA Statistics
R Medicine > RA Public aspects of medicine
Date Deposited: 13 Oct 2021 10:45
Last Modified: 20 Dec 2024 00:19
URI: http://eprints.lse.ac.uk/id/eprint/112419
