Ma, Tao ORCID: 0000-0002-8062-9217, Yang, Xuzhi and Szabo, Zoltan (2024) To switch or not to switch? Balanced policy switching in offline reinforcement learning. arXiv. (Submitted)
Text (Szabo_to-switch-or-not-to-switch) - Submitted Version - Download (321kB)
Abstract
Reinforcement learning (RL) -- finding the optimal behaviour (also referred to as policy) that maximizes the collected long-term cumulative reward -- is among the most influential approaches in machine learning, with a large number of successful applications. In several decision problems, however, one faces the possibility of policy switching -- changing from the current policy to a new one -- which incurs a non-negligible cost (examples include replacing the currently applied educational technology, modernizing a computing cluster, or introducing a new webpage design), and one is limited to historical data without the possibility of further online interaction. Despite the evident importance of this offline learning scenario, to the best of our knowledge, very little effort has been made to tackle the key problem of balancing the gain and the cost of switching in a flexible and principled way. Leveraging ideas from the area of optimal transport, we initiate the systematic study of policy switching in offline RL. We establish fundamental properties and design a Net Actor-Critic algorithm for the proposed novel switching formulation. Numerical experiments demonstrate the efficiency of our approach on multiple benchmarks from Gymnasium.
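To make the trade-off described in the abstract concrete, one way to write a balanced switching objective is sketched below; the specific notation (value function $V$, trade-off weight $\lambda$, and an optimal-transport switching cost $\mathrm{OT}$) is an illustrative assumption on our part, not the paper's exact formulation:

$$
\pi_{\text{new}} \in \arg\max_{\pi} \; V(\pi) \;-\; \lambda \, \mathrm{OT}\!\left(\pi_{\text{old}}, \pi\right),
$$

where $V(\pi)$ is the long-term cumulative reward of policy $\pi$ (estimated offline from historical data), $\mathrm{OT}(\pi_{\text{old}}, \pi)$ is an optimal-transport-based cost of switching from the current policy $\pi_{\text{old}}$ to $\pi$, and $\lambda \geq 0$ balances the gain of the new policy against the cost of adopting it. Setting $\lambda = 0$ recovers standard offline RL, while large $\lambda$ favours staying with $\pi_{\text{old}}$.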
| Item Type: | Monograph (Report) |
| --- | --- |
| Official URL: | https://arxiv.org/ |
| Divisions: | Statistics |
| Subjects: | H Social Sciences > HA Statistics |
| Date Deposited: | 09 Jul 2024 13:12 |
| Last Modified: | 14 Sep 2024 05:06 |
| URI: | http://eprints.lse.ac.uk/id/eprint/124144 |