Was that so Hard? Estimating Human Classification Difficulty

Morten Rieger Hannemose*, Josefine Vilsbøll Sundgaard, Niels Kvorning Ternov, Rasmus R. Paulsen, Anders Nymark Christensen

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



When doctors are trained to diagnose a specific disease, they learn faster when presented with cases in order of increasing difficulty. This creates the need for automatically estimating how difficult it is for doctors to classify a given case. In this paper, we introduce methods for estimating how hard it is for a doctor to diagnose a case represented by a medical image, both when ground truth difficulties are available for training, and when they are not. Our methods are based on embeddings obtained with deep metric learning. Additionally, we introduce a practical method for obtaining ground truth human difficulty for each image case in a dataset using self-assessed certainty. We apply our methods to two different medical datasets, achieving high Kendall rank correlation coefficients on both, showing that we outperform existing methods by a large margin on our problem and data.
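The Kendall rank correlation coefficient mentioned in the abstract measures how well a predicted difficulty ordering agrees with a ground-truth ordering, by comparing all pairs of cases. As a minimal, pure-Python sketch (the function name and example values below are illustrative, not from the paper):

```python
from itertools import combinations

def kendall_tau(x, y):
    """Kendall rank correlation (tau-a) between two equal-length sequences.

    Counts concordant minus discordant pairs over all pairs of indices;
    tied pairs count as neither.
    """
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i, j in combinations(range(n), 2):
        s = (x[i] - x[j]) * (y[i] - y[j])
        if s > 0:
            concordant += 1  # pair ranked in the same order by both
        elif s < 0:
            discordant += 1  # pair ranked in opposite order

    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical example: ground-truth difficulty ranks vs. predicted scores.
truth = [1, 2, 3, 4, 5]
pred = [0.1, 0.3, 0.2, 0.8, 0.9]
print(kendall_tau(truth, pred))  # → 0.8
```

A coefficient of 1 means the predicted difficulties rank the cases exactly as the ground truth does; 0 means no rank agreement. For real data with ties, a tie-corrected variant such as `scipy.stats.kendalltau` would be the practical choice.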

Original language: English
Title of host publication: Proceedings of Applications of Medical Artificial Intelligence
Publisher: Springer Science and Business Media Deutschland GmbH
Publication date: 2022
ISBN (Print): 978-3-031-17720-0
ISBN (Electronic): 978-3-031-17721-7
Publication status: Published - 2022
Event: 1st International Workshop on Applications of Medical Artificial Intelligence - Virtual, Online, Singapore
Duration: 18 Sept 2022 – 18 Sept 2022


Workshop: 1st International Workshop on Applications of Medical Artificial Intelligence
City: Virtual, Online
Series: Lecture Notes in Computer Science


Keywords:
  • Deep metric learning
  • Difficulty estimation
  • Human classification


