### Abstract

Latent variable models (LVMs) learn probabilistic models of data manifolds
lying in an \emph{ambient} Euclidean space. In a number of applications, a
priori known spatial constraints can shrink the ambient space into a
considerably smaller manifold. Additionally, in these applications the
Euclidean geometry might induce a suboptimal similarity measure, which could be
improved by choosing a different metric. Euclidean models ignore such
information and assign probability mass to data points that can never appear as
data, and vastly different likelihoods to points that are similar under the
desired metric. We propose the wrapped Gaussian process latent variable model
(WGPLVM), which extends Gaussian process latent variable models to take values
strictly on a given ambient Riemannian manifold, making the model blind to
impossible data points. This allows non-linear, probabilistic inference of
low-dimensional Riemannian submanifolds from data. Our evaluation on diverse
datasets shows that we improve performance on several tasks, including encoding,
visualization and uncertainty quantification.
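The core "wrapping" idea can be illustrated with a minimal sketch: draw a Gaussian sample in the tangent space at a base point of the manifold, then push it onto the manifold with the exponential map, so no probability mass falls outside the manifold. This is a hypothetical illustration on the unit sphere S², not the paper's implementation; the base point, basis, and scale are arbitrary choices for demonstration.

```python
import numpy as np

def exp_map_sphere(mu, v):
    """Exponential map on S^2: move from base point mu along tangent vector v."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return mu
    return np.cos(norm_v) * mu + np.sin(norm_v) * (v / norm_v)

rng = np.random.default_rng(0)
mu = np.array([0.0, 0.0, 1.0])            # base point (north pole)
basis = np.array([[1.0, 0.0, 0.0],        # orthonormal basis of the tangent
                  [0.0, 1.0, 0.0]])       # plane T_mu S^2

# Draw 2-D tangent coordinates from a Gaussian, then wrap onto the sphere.
coords = rng.normal(scale=0.3, size=(100, 2))
samples = np.array([exp_map_sphere(mu, c @ basis) for c in coords])

# Every wrapped sample lies exactly on the manifold (unit norm),
# so the model assigns no mass to impossible points.
print(np.allclose(np.linalg.norm(samples, axis=1), 1.0))
```

A plain Euclidean Gaussian in the ambient R³ would place almost all of its mass off the sphere; the wrapped construction avoids this by design.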

| Original language | English |
|---|---|
| Title of host publication | Proceedings of Machine Learning Research |
| Volume | 89 |
| Publication date | 2019 |
| Pages | 2368-2377 |
| Publication status | Published - 2019 |
| Event | 22nd International Conference on Artificial Intelligence and Statistics, LOISIR Hotel, Naha, Japan. Duration: 16 Apr 2019 → 18 Apr 2019 |

### Conference

| Conference | 22nd International Conference on Artificial Intelligence and Statistics |
|---|---|
| Location | LOISIR Hotel |
| Country | Japan |
| City | Naha |
| Period | 16/04/2019 → 18/04/2019 |

## Cite this

Mallasto, A., Hauberg, S., & Feragen, A. (2019). Probabilistic Riemannian submanifold learning with wrapped Gaussian process latent variable models. In *Proceedings of Machine Learning Research* (Vol. 89, pp. 2368-2377).