How to deal with missing data in supervised deep learning?

Niels Bruun Ipsen, Pierre-Alexandre Mattei, Jes Frellsen*

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



The issue of missing data in supervised learning has been largely overlooked, especially in the deep learning community. We investigate strategies to adapt neural architectures to handle missing values, focusing on regression and classification problems where the features are assumed to be missing at random. Of particular interest are schemes that allow a neural discriminative architecture to be reused as-is. One scheme involves imputing the missing values with learnable constants. We propose a second, novel approach that leverages recent advances in deep generative modelling: a deep latent variable model can be learned jointly with the discriminative model, using importance-weighted variational inference, in an end-to-end fashion. This hybrid approach, which mimics multiple imputation, also allows one to impute the data by relying on both the discriminative and the generative model. We also discuss ways of using a pre-trained generative model to train the discriminative one. In domains where powerful deep generative models are available, the hybrid approach leads to large performance gains.
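The constant-imputation scheme described above can be sketched as a simple masked forward pass. The sketch below is illustrative only (NumPy, not a trained network): in the paper's scheme the per-feature constants `c` would be learnable parameters optimized jointly with the discriminative model, and the "predictor" and generative draws stand in for the real network and deep latent variable model.

```python
import numpy as np

rng = np.random.default_rng(0)

def impute_with_constants(x, c):
    """Replace NaN entries of x (n, d) with per-feature constants c (d,).

    In the first scheme the constants are learnable parameters trained
    jointly with the discriminative network; here they are fixed values
    that only illustrate the forward pass.
    """
    mask = np.isnan(x)            # True where a feature is missing
    x_imp = np.where(mask, c, x)  # fill missing entries with the constants
    return x_imp, mask

# Toy data: 3 samples, 2 features, two entries missing.
x = np.array([[1.0, np.nan],
              [np.nan, 4.0],
              [5.0, 6.0]])
c = np.array([0.5, -0.5])  # hypothetical learnable constants

x_imp, mask = impute_with_constants(x, c)

# Multiple-imputation flavour (mimicking the hybrid approach): average a
# predictor's output over K random draws for the missing entries instead
# of committing to a single fill. The draws stand in for samples from a
# learned deep generative model; the row-sum stands in for the predictor.
K = 100
preds = []
for _ in range(K):
    draws = rng.normal(0.0, 1.0, size=x.shape)
    x_k = np.where(mask, draws, x)
    preds.append(x_k.sum(axis=1))
avg_pred = np.mean(preds, axis=0)
```

Note that fully observed rows are unaffected by either scheme: the mask leaves their entries untouched, so their prediction is deterministic across the K draws.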
Original language: English
Title of host publication: Proceedings of ICML Workshop on the Art of Learning with Missing Values
Number of pages: 5
Publication date: 2020
Publication status: Published - 2020
Event: 37th International Conference on Machine Learning - Virtual event, Virtual, Online
Duration: 13 Jul 2020 - 18 Jul 2020


Conference: 37th International Conference on Machine Learning
Location: Virtual event
City: Virtual, Online


Keywords:
  • Missing data
  • Supervised learning
  • Deep learning


