How to Train Deep Variational Autoencoders and Probabilistic Ladder Networks

Research output: Article in proceedings (peer-reviewed) – Annual report year: 2016

Variational autoencoders are a powerful framework for unsupervised learning. However, previous work has been restricted to shallow models with one or two layers of fully factorized stochastic latent variables, limiting the flexibility of the latent representation. We propose three advances in training algorithms for variational autoencoders that, for the first time, allow training of deep models with up to five stochastic layers: (1) a structure similar to the Ladder network as the inference model, (2) a warm-up period that keeps stochastic units active during early training, and (3) the use of batch normalization. Using these improvements we show state-of-the-art log-likelihood results for generative modeling on several benchmark datasets.
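
The second advance, warm-up, can be read as annealing a weight beta on the KL term of the evidence lower bound from 0 to 1 over the first part of training, so the reconstruction term dominates before the KL penalty starts deactivating latent units. The sketch below illustrates this for a one-stochastic-layer VAE; it is written in PyTorch with illustrative names, layer sizes, and a linear schedule, all assumptions rather than the authors' code, and it omits the ladder-style inference model of advance (1).

# Minimal sketch of advance (2), the warm-up period, for a one-layer VAE.
# PyTorch is used for illustration; architecture, sizes, and the linear
# annealing schedule are assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """One stochastic layer; the paper stacks up to five such layers."""
    def __init__(self, x_dim=784, h_dim=512, z_dim=64):
        super().__init__()
        # Batch normalization in the deterministic layers illustrates advance (3).
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim),
                                 nn.BatchNorm1d(h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.dec(z), mu, logvar

def neg_elbo(x_logits, x, mu, logvar, beta):
    # Negative ELBO with a beta-weighted KL term. Annealing beta from 0 to 1
    # (warm-up) reduces the early pressure to switch stochastic units off.
    rec = F.binary_cross_entropy_with_logits(x_logits, x, reduction='sum')
    kl = -0.5 * torch.sum(1.0 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
warmup_epochs = 200  # length of the annealing schedule is an assumption
for epoch in range(5):  # shortened loop; stand-in binary data for illustration
    beta = min(1.0, epoch / warmup_epochs)
    x = torch.rand(32, 784).round()
    x_logits, mu, logvar = model(x)
    loss = neg_elbo(x_logits, x, mu, logvar, beta)
    opt.zero_grad()
    loss.backward()
    opt.step()

With beta = 0 the objective is pure reconstruction, so gradients flow through every latent unit; raising beta gradually lets the KL regularizer take effect only after the units carry useful information.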
Original language: English
Title of host publication: Proceedings of the 33rd International Conference on Machine Learning (ICML 2016)
Number of pages: 9
Publication date: 2016
State: Published - 2016
Event: 33rd International Conference on Machine Learning (ICML 2016) - New York, United States
Duration: 19 Jun 2016 - 24 Jun 2016
Conference number: 33
http://icml.cc/2016/

Conference

Conference: 33rd International Conference on Machine Learning (ICML 2016)
Number: 33
Country: United States
City: New York
Period: 19/06/2016 - 24/06/2016
Internet address: http://icml.cc/2016/
Series: JMLR: Workshop and Conference Proceedings
Volume: 48
ISSN: 1938-7228
