Learning invariant representations from prior knowledge in Deep Learning

Abstract
This thesis consists of five independent pieces of work divided over four chapters. This manuscript tries to bind them together under one common theme: how prior knowledge can be used to incorporate inductive biases into deep learning models. The first chapter reviews the progress of inductive biases in supervised deep learning models, especially spatial transformer architectures. In the chapter, we propose improvements to this architecture that make the inductive bias more general, using recent progress within diffeomorphic transformations. The second chapter studies generative models and how they are being used within disentangled representation learning. The chapter presents a new way of doing disentanglement by explicitly incorporating a spatial inductive bias into a generative model, which allows it to disentangle certain factors in the data into different latent spaces. The third chapter investigates how to improve variance estimation in neural networks. This is done by investigating contemporary methods that either try to make the training more robust or incorporate an inductive bias to make the predictive variance behave more in line with prior knowledge. The fourth, and last, chapter investigates the field of representation learning in the context of protein sequences. The chapter shows how many current practices for learning meaningful representations are suboptimal, and how taking the data manifold into account can lead to more meaningful representations.
Original language | English
---|---
Publisher | Technical University of Denmark
Number of pages | 156
Publication status | Published - 2020
Projects
1 Finished

- Deep Metric Learning
Detlefsen, N. S. (PhD Student), Rainforth, T. (Examiner), Ferkinghoff-Borg, J. (Examiner), Hauberg, S. (Main Supervisor), Winther, O. (Supervisor) & Alstrøm, T. S. (Examiner)
01/09/2017 → 14/04/2021
Project: PhD