Improving Semi-Supervised Learning with Auxiliary Deep Generative Models

Lars Maaløe, Casper Kaae Sønderby, Søren Kaae Sønderby, Ole Winther

Research output: Contribution to conference › Paper › Research › peer-review

Abstract

Deep generative models based upon continuous variational distributions parameterized by deep networks give state-of-the-art performance. In this paper we propose a framework for extending the latent representation with extra auxiliary variables in order to make the variational distribution more expressive for semi-supervised learning. By utilizing the stochasticity of the auxiliary variable, we demonstrate how to train discriminative classifiers, resulting in state-of-the-art semi-supervised performance, exemplified by a 0.96% error rate on MNIST using only 100 labeled data points. Furthermore, we observe empirically that using auxiliary variables increases convergence speed, suggesting that less expressive variational distributions not only lead to looser bounds but also to slower model training.
Original language: English
Publication date: 2015
Number of pages: 5
Publication status: Published - 2015
Event: NIPS Workshop on Advances in Approximate Bayesian Inference - Palais des Congrès de Montréal, Montréal, Canada
Duration: 7 Dec 2015 → …
Internet address: http://approximateinference.org/

Workshop

Workshop: NIPS Workshop on Advances in Approximate Bayesian Inference
Location: Palais des Congrès de Montréal
Country/Territory: Canada
City: Montréal
Period: 07/12/2015 → …
Other: Part of the 29th Annual Conference on Neural Information Processing Systems (NIPS 2015)
Internet address: http://approximateinference.org/