Decoupling Feature Extraction and Classification Layers for Calibrated Neural Networks

Mikkel Jordahn*, Pablo M. Olmos

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Deep Neural Networks (DNNs) have shown great promise in many classification applications, yet are widely known to produce poorly calibrated predictions when over-parametrized. Improving DNN calibration without compromising model accuracy is of great importance in safety-critical applications such as the healthcare sector. In this work, we show that decoupling the training of the feature extraction layers and the classification layers in over-parametrized DNN architectures such as Wide Residual Networks (WRN) and Vision Transformers (ViT) significantly improves model calibration whilst retaining accuracy, at a low training cost. In addition, we show that placing a Gaussian prior on the outputs of the last hidden layer of a DNN, and training the model variationally during the classification training stage, further improves calibration. We illustrate that these methods improve calibration across ViT and WRN architectures on several image classification benchmark datasets.
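The abstract describes a two-stage recipe: train the network, then freeze the feature extraction layers and retrain only the classification layers, optionally with a Gaussian prior on the last hidden layer outputs trained variationally. Below is a minimal PyTorch sketch of the second stage, based only on the abstract; the head architecture, the KL weight beta, the optimizer, and all hyperparameters are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' code) of decoupled second-stage training,
    # assuming a backbone that maps inputs to feature vectors of size feat_dim.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VariationalHead(nn.Module):
        """Classifier head with a Gaussian prior on the last hidden layer
        outputs, trained via the reparameterization trick plus a KL penalty
        (an assumed, VAE-style form of the variational objective)."""
        def __init__(self, feat_dim: int, num_classes: int):
            super().__init__()
            self.mu = nn.Linear(feat_dim, feat_dim)       # posterior mean
            self.log_var = nn.Linear(feat_dim, feat_dim)  # posterior log-variance
            self.classifier = nn.Linear(feat_dim, num_classes)

        def forward(self, feats):
            mu, log_var = self.mu(feats), self.log_var(feats)
            # Reparameterization: z = mu + sigma * eps, eps ~ N(0, I)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * log_var)
            # KL divergence from N(mu, sigma^2) to the standard Gaussian prior N(0, I)
            kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp(), dim=1).mean()
            return self.classifier(z), kl

    def train_classification_stage(backbone, head, loader, device, beta=1e-3):
        """Stage 2: freeze the feature extractor, train only the classifier head."""
        backbone.eval()
        for p in backbone.parameters():   # decouple: no gradients to the backbone
            p.requires_grad_(False)
        opt = torch.optim.Adam(head.parameters(), lr=1e-3)
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            with torch.no_grad():
                feats = backbone(x)       # frozen feature extraction
            logits, kl = head(feats)
            # ELBO-style objective: cross-entropy plus KL, weighted by beta
            loss = F.cross_entropy(logits, y) + beta * kl
            opt.zero_grad()
            loss.backward()
            opt.step()

At test time one might average logits over several samples of z or simply use the posterior mean; the abstract does not specify this, so the choice is left open here.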
Original language: English
Title of host publication: Proceedings of the 41st International Conference on Machine Learning, ICML'24
Volume: 235
Publisher: Proceedings of Machine Learning Research
Publication date: 2024
Pages: 22530-22550
Publication status: Published - 2024
Event: 41st International Conference on Machine Learning - Vienna, Austria
Duration: 21 Jul 2024 - 27 Jul 2024

Conference

Conference: 41st International Conference on Machine Learning
Country/Territory: Austria
City: Vienna
Period: 21/07/2024 - 27/07/2024
