Convolutional Neural Networks - Generalizability and Interpretations

Research output: Book/Report › Ph.D. thesis


Abstract

Sufficient data is key when training Machine Learning algorithms in order to obtain models that generalize to operational use. Sometimes sufficient data is infeasible to obtain, and this prevents the use of Machine Learning in many applications. The goal of this thesis is to gain insights and learn from data despite it being limited in amount or in context representation. Within Machine Learning, this thesis focuses on Convolutional Neural Networks for Computer Vision. The research aims to answer how to explore a model's generalizability to the whole population of data samples and how to interpret the model's function. The thesis presents three overall approaches to gaining insights on generalizability and interpretation. First, one can change the main objective of a problem to study expected insufficiencies and, based on this, make a better choice of model. For this first approach the thesis presents both a study on translational invariance and an example of changing the objective of a problem from classification to segmentation in order to robustly extract lower-level information. The second approach is the use of simulated data, which can help by transferring knowledge into our model when real data is scarce. The results show clear advantages both when rendered Synthetic Aperture Radar images are used and when predictions from physical models are used as target variables and matched with real data to form a large dataset. The third approach to coping with data insufficiencies is to visualize and understand the internal representations of a model. This approach is explored, and concrete examples of the learnings that can be obtained are shown. There is no doubt that large quantities of well-representing data are the best foundation for training Machine Learning models. On the other hand, many tools and techniques are available to interpret and understand properties of our models. With these at hand we can still learn about our models and use this knowledge to, e.g., collect better datasets or improve the modeling.
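The abstract mentions a study on translational invariance as one way to probe a model's expected insufficiencies. As a minimal sketch of what such a probe can look like (this toy NumPy model and its names are illustrative assumptions, not the thesis's actual experimental setup): a convolution followed by global max pooling is checked by placing the same pattern at two positions and comparing the outputs.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 2-D valid cross-correlation (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def model_output(image, kernel):
    """Convolution + global max pooling: the pooling step makes the
    scalar output invariant to where the pattern sits in the image."""
    return conv2d_valid(image, kernel).max()

# Hypothetical probe: the same 3x3 pattern at two different positions.
pattern = np.arange(9, dtype=float).reshape(3, 3)
img_a = np.zeros((16, 16)); img_a[2:5, 2:5] = pattern
img_b = np.zeros((16, 16)); img_b[9:12, 7:10] = pattern

kernel = pattern  # matched filter for the pattern
shifted_equal = np.isclose(model_output(img_a, kernel),
                           model_output(img_b, kernel))
print(shifted_equal)  # True: output is unchanged under this translation
```

A real study would replace the toy model with a trained CNN and sweep over many shifts, measuring how much the predictions drift; architectures without global pooling typically show less invariance.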
Original language: English
Publisher: Technical University of Denmark
Number of pages: 139
Publication status: Published - 2018
Series: DTU Compute PHD-2017
Volume: 459
ISSN: 0909-3192

Cite this

Malmgren-Hansen, D. (2018). Convolutional Neural Networks - Generalizability and Interpretations. Technical University of Denmark. DTU Compute PHD-2017, Vol. 459