Bayesian Triplet Loss: Uncertainty Quantification in Image Retrieval

Frederik Warburg, Martin Jørgensen, Javier Civera, Søren Hauberg

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Uncertainty quantification in image retrieval is crucial for downstream decisions, yet it remains a challenging and largely unexplored problem. Current methods for estimating uncertainties are poorly calibrated, computationally expensive, or based on heuristics. We present a new method that views image embeddings as stochastic rather than deterministic features. Our two main contributions are (1) a likelihood that matches the triplet constraint and evaluates the probability of an anchor being closer to a positive than to a negative; and (2) a prior over the feature space that justifies the conventional ℓ2 normalization. To ensure computational efficiency, we derive a variational approximation of the posterior, called the Bayesian triplet loss, that produces state-of-the-art uncertainty estimates and matches the predictive performance of current state-of-the-art methods.
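
To make the triplet likelihood concrete, the sketch below shows one hypothetical way to evaluate it in PyTorch for embeddings modelled as diagonal Gaussians. The function name, the margin parameter, and the moment-matching step are illustrative assumptions based on the abstract, not the authors' released implementation. Per dimension, the distance difference (a - p)^2 - (a - n)^2 simplifies to 2a(n - p) + p^2 - n^2, whose mean and variance are analytic for independent Gaussians, so the probability that the anchor lands closer to the positive than to the negative can be approximated with a standard normal CDF.

import torch

def bayesian_triplet_loss(mu_a, var_a, mu_p, var_p, mu_n, var_n, margin=0.0):
    # Hypothetical sketch: negative log-probability that the anchor is
    # closer to the positive than to the negative, for embeddings modelled
    # as diagonal Gaussians. Inputs are (batch, dim) means and variances.
    #
    # Per dimension, tau = (a-p)^2 - (a-n)^2 = 2a(n-p) + p^2 - n^2 for a
    # shared anchor a; its first two moments are analytic because a, p, n
    # are independent Gaussians.
    mu_tau = (2 * mu_a * (mu_n - mu_p)
              + mu_p ** 2 + var_p - mu_n ** 2 - var_n).sum(dim=-1)

    # Var[2a(n-p)] with X = a, Y = n - p independent:
    # Var[XY] = Var[X]Var[Y] + Var[X]E[Y]^2 + E[X]^2 Var[Y].
    var_prod = 4 * (var_a * (var_n + var_p)
                    + var_a * (mu_n - mu_p) ** 2
                    + mu_a ** 2 * (var_n + var_p))
    # Var[p^2 - n^2] for Gaussian p, n: Var[x^2] = 4 mu^2 sigma^2 + 2 sigma^4.
    var_sq = (4 * mu_p ** 2 * var_p + 2 * var_p ** 2
              + 4 * mu_n ** 2 * var_n + 2 * var_n ** 2)
    # Cross term via Cov(x, x^2) = 2 mu sigma^2 for a Gaussian x.
    cov = 2 * mu_a * (-2 * mu_n * var_n - 2 * mu_p * var_p)
    var_tau = (var_prod + var_sq + 2 * cov).sum(dim=-1)

    # Moment-match tau to a Gaussian; P(tau < -margin) is a normal CDF.
    std_tau = var_tau.clamp_min(1e-8).sqrt()
    prob = torch.distributions.Normal(0.0, 1.0).cdf((-margin - mu_tau) / std_tau)
    return -prob.clamp_min(1e-8).log().mean()

In training, the means and variances would come from a network head that predicts a mean and a variance per embedding dimension; minimizing the returned negative log-likelihood pushes the probability of satisfying the triplet constraint toward one.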
Original language: English
Title of host publication: Proceedings of the 2021 International Conference on Computer Vision
Number of pages: 11
Publication date: 2021
Publication status: Published - 2021
Event: 2021 International Conference on Computer Vision - Virtual event
Duration: 11 Oct 2021 – 17 Oct 2021
https://iccv2021.thecvf.com/

Conference

Conference: 2021 International Conference on Computer Vision
Location: Virtual event
Period: 11/10/2021 – 17/10/2021
Internet address: https://iccv2021.thecvf.com/
