Abstract
This paper takes a step towards temporal reasoning in a dynamically changing video, not in the pixel space that constitutes its frames, but in a latent space that describes the non-linear dynamics of the objects in its world. We introduce the Kalman variational auto-encoder, a framework for unsupervised learning of sequential data that disentangles two latent representations: an object's representation, coming from a recognition model, and a latent state describing its dynamics. As a result, the evolution of the world can be imagined and missing data imputed, both without the need to generate high-dimensional frames at each time step. The model is trained end-to-end on videos of a variety of simulated physical systems, and outperforms competing methods in generative and missing-data imputation tasks.
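As a rough illustration of the disentangling the abstract describes, the sketch below rolls a linear Gaussian state-space model forward over a dynamics state z_t, emitting a low-dimensional pseudo-observation a_t at each step, and only decodes a_t into a frame when one is actually wanted. This is a minimal NumPy sketch under stated assumptions, not the paper's implementation: the dimensions, the matrices `A`, `C`, `Q`, `R`, the linear `decode` stand-in for the neural decoder, and the `imagine` helper are all illustrative inventions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): 4-D dynamics state,
# 2-D pseudo-observation, 16-D stand-in "frame".
dim_z, dim_a, dim_x = 4, 2, 16

# Linear Gaussian state-space model (LGSSM) over the dynamics state z_t:
#   z_t = A z_{t-1} + w_t,  w_t ~ N(0, Q)
#   a_t = C z_t + v_t,      v_t ~ N(0, R)
A = np.eye(dim_z) + 0.01 * rng.standard_normal((dim_z, dim_z))
C = rng.standard_normal((dim_a, dim_z))
Q = 0.01 * np.eye(dim_z)
R = 0.01 * np.eye(dim_a)

# Stand-in for the learned decoder p(x_t | a_t); the paper uses a
# neural network here, this linear map is purely illustrative.
W_dec = rng.standard_normal((dim_x, dim_a))
decode = lambda a: W_dec @ a

def imagine(z0, horizon):
    """Roll the LGSSM forward entirely in the low-dimensional latent space.

    Frames are decoded only at the end, so imagining the future never
    touches pixel space at intermediate steps.
    """
    z, pseudo_obs = z0, []
    for _ in range(horizon):
        z = A @ z + rng.multivariate_normal(np.zeros(dim_z), Q)
        pseudo_obs.append(C @ z + rng.multivariate_normal(np.zeros(dim_a), R))
    return [decode(a) for a in pseudo_obs]

frames = imagine(np.zeros(dim_z), horizon=20)
print(len(frames), frames[0].shape)  # 20 imagined frames, each of dim 16
```

Because the rollout lives in the z_t / a_t space, imagining the future or imputing missing steps pays the decoding cost only for the frames that are ultimately rendered, which is the abstract's point about not generating high-dimensional frames at each time step.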
Original language | English
---|---
Title of host publication | Proceedings of 31st Conference on Neural Information Processing Systems
Number of pages | 13
Publication date | 2017
Publication status | Published - 2017
Event | 31st Conference on Neural Information Processing Systems, Long Beach, United States
Duration | 4 Dec 2017 → 9 Dec 2017
Conference

Conference | 31st Conference on Neural Information Processing Systems
---|---
Country/Territory | United States
City | Long Beach
Period | 04/12/2017 → 09/12/2017