Reparameterization invariance in approximate Bayesian inference

Hrittik Roy, Marco Miani, Carl Henrik Ek, Philipp Hennig, Marvin Pförtner, Lukas Tatze, Søren Hauberg

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Current approximate posteriors in Bayesian neural networks (BNNs) exhibit a crucial limitation: they fail to maintain invariance under reparameterization, i.e., BNNs assign different posterior densities to different parametrizations of identical functions. This creates a fundamental flaw in the application of Bayesian principles, as it breaks the correspondence between uncertainty over the parameters and uncertainty over the parametrized function. In this paper, we investigate this issue in the context of the increasingly popular linearized Laplace approximation. Specifically, it has been observed that linearized predictives alleviate the common underfitting problems of the Laplace approximation. We develop a new geometric view of reparametrizations from which we explain the success of linearization. Moreover, we demonstrate that these reparameterization invariance properties can be extended to the original neural network predictive using a Riemannian diffusion process, giving a straightforward algorithm for approximate posterior sampling, which empirically improves posterior fit.
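For context, the linearized Laplace approximation referenced in the abstract replaces the network with its first-order Taylor expansion in the parameters around the MAP estimate, so that a Gaussian posterior over parameters induces the linearized predictive. Below is a minimal illustrative sketch (not the authors' code) of evaluating such a linearized predictive in JAX; `apply_fn`, `params_map`, and the Gaussian parameter sample are assumed placeholder names.

```python
# Minimal sketch: linearized-Laplace predictive via a Jacobian-vector product.
# All names are illustrative; the paper's actual algorithm is not reproduced here.
import jax
import jax.numpy as jnp

def linearized_prediction(apply_fn, params_map, params_sample, x):
    # f_lin(x; theta) = f(x; theta*) + J_theta f(x; theta*) (theta - theta*)
    delta = jax.tree_util.tree_map(lambda s, m: s - m, params_sample, params_map)
    f_map, jvp = jax.jvp(lambda p: apply_fn(p, x), (params_map,), (delta,))
    return f_map + jvp

# Toy usage: a one-layer model with an (assumed) diagonal Gaussian posterior
# around the MAP weights, standing in for a Laplace posterior sample.
def apply_fn(params, x):
    return jnp.tanh(x @ params["W"] + params["b"])

k1, k2, k3 = jax.random.split(jax.random.PRNGKey(0), 3)
params_map = {"W": jax.random.normal(k1, (3, 2)), "b": jnp.zeros(2)}
noise = {"W": 0.1 * jax.random.normal(k2, (3, 2)), "b": 0.1 * jax.random.normal(k3, (2,))}
params_sample = jax.tree_util.tree_map(lambda m, n: m + n, params_map, noise)

x = jnp.ones((5, 3))
print(linearized_prediction(apply_fn, params_map, params_sample, x))
```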
Original language: English
Title of host publication: Proceedings of the 38th Conference on Neural Information Processing Systems (NeurIPS 2024)
Number of pages: 26
Publication date: 2024
Publication status: Published - 2024
Event: 38th Conference on Neural Information Processing Systems - Vancouver, Canada
Duration: 10 Dec 2024 – 15 Dec 2024

Conference

Conference: 38th Conference on Neural Information Processing Systems
Country/Territory: Canada
City: Vancouver
Period: 10/12/2024 – 15/12/2024

