Abstract
It can hardly have escaped many that the field of machine learning has seen immense growth in popularity over the past decade, not only in terms of research productivity, but equally in applications and popular media. The popularization of, and developments in, neural networks are perhaps the largest contributor to this. These models have shown immense success in many applications, ranging from text and image generation, to assisting diagnostics in hospitals, to being used in self-driving cars. Nevertheless, neural networks are still seeing slow adoption in safety-critical applications, in large part due to poor uncertainty quantification and calibration of these models. As an example, consider a diagnostic assistance tool: a neural network should not just predict yes or no when it tries to detect fractures in X-ray images; we would like the model to say "I am 90% sure a fracture is present", and moreover have the predicted probability reflect the model's accuracy.
In this thesis, six scientific works are covered, all related to uncertainty quantification and calibration of machine learning models. In two of the works, we look at methods for obtaining calibrated neural networks: in the first, we present a cheap training method which significantly improves the calibration of almost any neural network, and in the second, we investigate combinations of Bayesian Neural Networks and Deep Ensembles to similarly improve model calibration. The remaining four works focus on use cases for models with good uncertainty quantification: two are related to Bayesian optimization, a black-box optimization method which utilizes probabilistic machine learning models, and the last two focus on how to utilize models with good uncertainty quantification in a Danish e-learning platform called WriteReader.
| Original language | English |
|---|---|
| Publisher | Technical University of Denmark |
| Number of pages | 180 |
| Publication status | Published - 2024 |
Calibrated Machine Learning Models: How To Get Them and Why They Matter

Projects
- Counterfactual Techniques in Healthcare (Finished)
  Jordahn, M. (PhD Student), Andersen, M. R. (Main Supervisor), Hansen, L. K. (Supervisor), Fortuin, V. (Examiner) & Solin, A. H. (Examiner)
  01/09/2021 → 11/03/2025
  Project: PhD