Hierarchical VAEs Know What They Don't Know

Jakob D. Havtorn, Jes Frellsen, Søren Hauberg, Lars Maaløe

Research output: Chapter in Book/Report/Conference proceeding > Article in proceedings > Research > peer-review


Abstract

Deep generative models have been demonstrated as state-of-the-art density estimators. Yet, recent work has found that they often assign a higher likelihood to data from outside the training distribution. This seemingly paradoxical behavior has caused concerns over the quality of the attained density estimates. In the context of hierarchical variational autoencoders, we provide evidence that this behavior arises because out-of-distribution (OOD) data often share in-distribution low-level features. We argue that this is both expected and desirable behavior. With this insight in hand, we develop a fast, scalable and fully unsupervised likelihood-ratio score for OOD detection that requires data to be in-distribution across all feature levels. We benchmark the method on a vast set of data and model combinations and achieve state-of-the-art results on out-of-distribution detection.
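
The abstract names a likelihood-ratio score computed from a hierarchical VAE but does not spell it out. Below is a minimal, hypothetical PyTorch sketch of the general idea: compare a full ELBO, in which all latents are inferred from the input, against a partial bound in which the lowest-level latent is drawn from its conditional prior, so that only higher-level features can explain the data. The two-level architecture, the variable names, and the exact form of the partial bound are illustrative assumptions, not the authors' implementation; consult the paper for the precise score.

    # Hypothetical sketch of a likelihood-ratio OOD score for a two-level VAE.
    # Architecture, names, and the exact partial bound are assumptions for
    # illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torch.distributions import Normal, kl_divergence

    class TwoLevelVAE(nn.Module):
        def __init__(self, x_dim=784, z1_dim=32, z2_dim=16, h=256):
            super().__init__()
            # Bottom-up inference networks q(z1|x) and q(z2|z1).
            self.enc1 = nn.Sequential(nn.Linear(x_dim, h), nn.ReLU(), nn.Linear(h, 2 * z1_dim))
            self.enc2 = nn.Sequential(nn.Linear(z1_dim, h), nn.ReLU(), nn.Linear(h, 2 * z2_dim))
            # Top-down generative networks p(z1|z2) and p(x|z1).
            self.prior1 = nn.Sequential(nn.Linear(z2_dim, h), nn.ReLU(), nn.Linear(h, 2 * z1_dim))
            self.dec = nn.Sequential(nn.Linear(z1_dim, h), nn.ReLU(), nn.Linear(h, x_dim))

        @staticmethod
        def _gauss(params):
            # Split network output into mean and log-std of a diagonal Gaussian.
            mu, log_std = params.chunk(2, dim=-1)
            return Normal(mu, log_std.exp())

        def bounds(self, x):
            """Return the full ELBO and a partial bound in which the lowest-level
            latent z1 is sampled from its conditional prior p(z1|z2) instead of
            the posterior q(z1|x)."""
            q_z1 = self._gauss(self.enc1(x))
            z1 = q_z1.rsample()
            q_z2 = self._gauss(self.enc2(z1))
            z2 = q_z2.rsample()
            p_z2 = Normal(torch.zeros_like(z2), torch.ones_like(z2))
            p_z1 = self._gauss(self.prior1(z2))

            def recon_ll(z1_sample):
                # Bernoulli log-likelihood of x under the decoder.
                logits = self.dec(z1_sample)
                return -F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)

            kl1 = kl_divergence(q_z1, p_z1).sum(-1)
            kl2 = kl_divergence(q_z2, p_z2).sum(-1)
            elbo_full = recon_ll(z1) - kl1 - kl2

            # Partial bound: the low-level latent comes from the prior, so only
            # the top-level latent carries information inferred from x.
            z1_prior = p_z1.rsample()
            elbo_partial = recon_ll(z1_prior) - kl2
            return elbo_full, elbo_partial

    def llr_score(model, x):
        # Difference of the two bounds: in this sketch, a higher score means the
        # input is still well explained when low-level details must come from
        # the prior, i.e. it looks in-distribution across feature levels.
        elbo_full, elbo_partial = model.bounds(x)
        return elbo_partial - elbo_full

    # Usage sketch: score a batch of flattened images with values in [0, 1].
    model = TwoLevelVAE()
    x = torch.rand(8, 784)
    print(llr_score(model, x))

Because both bounds are computed from a single trained model and require no labels or auxiliary background model, a score of this form is fast, scalable and fully unsupervised, matching the properties claimed in the abstract.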
Original language: English
Title of host publication: Proceedings of the 38th International Conference on Machine Learning
Number of pages: 12
Publisher: International Machine Learning Society (IMLS)
Publication date: 2021
Publication status: Published - 2021
Event: 38th International Conference on Machine Learning - Virtual event
Duration: 18 Jul 2021 - 24 Jul 2021
Conference number: 38
https://icml.cc/Conferences/2021

Conference

Conference: 38th International Conference on Machine Learning
Number: 38
Location: Virtual event
Period: 18/07/2021 - 24/07/2021
Internet address: https://icml.cc/Conferences/2021
Series: Proceedings of Machine Learning Research
Volume: 139
ISSN: 2640-3498
