On Masked Pre-training and the Marginal Likelihood

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Masked pre-training removes random input dimensions and learns a model that can predict the missing values. Empirical results indicate that this intuitive form of self-supervised learning yields models that generalize very well to new domains. A theoretical understanding is, however, lacking. This paper shows that masked pre-training with a suitable cumulative scoring function corresponds to maximizing the model’s marginal likelihood, which is the de facto Bayesian model selection measure of generalization. Beyond shedding light on the success of masked pre-training, this insight also suggests that Bayesian models can be trained with appropriately designed self-supervision. Empirically, we confirm the developed theory and explore the main learning principles of masked pre-training in large language models.
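
To illustrate the kind of identity the abstract refers to, the log marginal likelihood of data x_{1:n} admits the standard chain-rule decomposition into Bayesian posterior-predictive terms (a sketch only; the paper’s precise cumulative scoring function over random masks is not reproduced here):

\log p(\mathbf{x}_{1:n}) = \sum_{i=1}^{n} \log p(x_i \mid \mathbf{x}_{<i}), \qquad p(x_i \mid \mathbf{x}_{<i}) = \int p(x_i \mid \mathbf{x}_{<i}, \theta)\, p(\theta \mid \mathbf{x}_{<i})\, \mathrm{d}\theta.

Read this way, scoring masked-dimension predictions cumulatively as more context is revealed, averaged over random orderings of the input dimensions, targets the same sum of predictive log scores; the specific ordering and masking scheme sketched here is an assumption for illustration, not a quotation of the paper’s construction.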
Original language: English
Title of host publication: Proceedings of the 37th Conference on Neural Information Processing Systems
Number of pages: 11
Volume: 36
Publisher: Neural Information Processing Systems Foundation
Publication status: Accepted/In press - 2024
Event: 37th Conference on Neural Information Processing Systems, New Orleans Ernest N. Morial Convention Center, New Orleans, United States
Duration: 10 Dec 2023 - 16 Dec 2023

Conference

Conference: 37th Conference on Neural Information Processing Systems
Location: New Orleans Ernest N. Morial Convention Center
Country/Territory: United States
City: New Orleans
Period: 10/12/2023 - 16/12/2023
