The Role of Pretrained Representations for the OOD Generalization of RL Agents

Frederik Träuble, Andrea Dittadi, Manuel Wüthrich, Felix Widmaier, Peter Gehler, Ole Winther, Francesco Locatello, Olivier Bachem, Bernhard Schölkopf, Stefan Bauer

Research output: Contribution to conference › Paper › Research › peer-review


Abstract

Building sample-efficient agents that generalize out-of-distribution (OOD) in real-world settings remains a fundamental unsolved problem on the path towards achieving higher-level cognition. One particularly promising approach is to begin with low-dimensional, pretrained representations of our world, which should facilitate efficient downstream learning and generalization. By training 240 representations and over 10,000 reinforcement learning (RL) policies on a simulated robotic setup, we evaluate to what extent different properties of pretrained VAE-based representations affect the OOD generalization of downstream agents. We observe that many agents are surprisingly robust to realistic distribution shifts, including the challenging sim-to-real case. In addition, we find that the generalization performance of a simple downstream proxy task reliably predicts the generalization performance of our RL agents under a wide range of OOD settings. Such proxy tasks can thus be used to select pretrained representations that will lead to agents that generalize.
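
The pipeline described in the abstract (a frozen, pretrained VAE encoder mapping camera images to a low-dimensional latent that a downstream policy consumes, plus a simple supervised proxy task on those latents used to select representations) can be sketched as follows. This is a minimal PyTorch illustration, not the authors' code: the encoder architecture, latent size (16), image resolution (64x64), action dimension (9), and proxy target (3-dimensional object position) are all assumptions made for the example.

    # Minimal sketch (assumed architecture and dimensions, not the paper's code).
    import torch
    import torch.nn as nn

    LATENT_DIM = 16  # assumed latent size

    class Encoder(nn.Module):
        """Stand-in for a pretrained VAE encoder; returns the mean of q(z|x)."""
        def __init__(self, latent_dim=LATENT_DIM):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2), nn.ReLU(),
                nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
                nn.Flatten(),
            )
            self.fc_mu = nn.LazyLinear(latent_dim)

        def forward(self, x):
            return self.fc_mu(self.conv(x))

    encoder = Encoder()
    encoder.eval()
    for p in encoder.parameters():   # freeze: the representation is pretrained
        p.requires_grad_(False)

    # The downstream policy acts on latents, never on raw pixels.
    policy = nn.Sequential(
        nn.Linear(LATENT_DIM, 256), nn.ReLU(),
        nn.Linear(256, 9),           # assumed 9-dim action space
    )

    obs = torch.rand(8, 3, 64, 64)   # batch of camera observations
    with torch.no_grad():
        z = encoder(obs)             # (8, LATENT_DIM) latent observations
    actions = policy(z)

    # Proxy task: regress ground-truth factors (e.g. object position) from
    # the latents. Low proxy error under distribution shift is used as a
    # cheap predictor of the RL agent's OOD generalization.
    proxy_head = nn.Linear(LATENT_DIM, 3)   # assumed 3-dim target
    targets = torch.rand(8, 3)
    proxy_loss = nn.functional.mse_loss(proxy_head(z), targets)

In the paper's protocol it is the proxy task's generalization performance, measured on shifted data, that correlates with the downstream agent's OOD performance; the sketch above only shows the proxy loss as a single MSE computation on one batch.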
Original language: English
Publication date: 2022
Number of pages: 20
Publication status: Published - 2022
Event: The Tenth International Conference on Learning Representations - Virtual
Duration: 25 Apr 2022 – 29 Apr 2022
Conference number: 10

Conference

Conference: The Tenth International Conference on Learning Representations
Number: 10
City: Virtual
Period: 25/04/2022 – 29/04/2022
