On the role of Model Uncertainties in Bayesian Optimization

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review



Bayesian optimization (BO) is a popular method for black-box optimization, which relies on uncertainty as part of its decision-making process when deciding which experiment to perform next. However, little work has addressed the effect of uncertainty on the performance of the BO algorithm, or the extent to which calibrated uncertainties improve the ability to find the global optimum. In this work, we provide an extensive study of the relationship between BO performance (regret) and uncertainty calibration for popular surrogate models, comparing them across both synthetic and real-world experiments. Our results confirm that Gaussian Processes are strong surrogate models and that they tend to outperform other popular models. Our results further show a positive association between calibration error and regret, but interestingly, this association disappears when we control for the type of model in the analysis. We also study the effect of re-calibration and demonstrate that it generally does not lead to improved regret. Finally, we provide theoretical justification for why uncertainty calibration might be difficult to combine with BO, owing to the small sample sizes commonly used.
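The abstract refers to BO's core loop: a surrogate model (here, a Gaussian Process) supplies posterior mean and variance, and an acquisition function uses that uncertainty to pick the next experiment. The following is a minimal illustrative sketch of that loop, not the paper's implementation; the RBF lengthscale, the UCB acquisition rule, the grid-search maximization, and all parameter values are assumptions made for the example.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential kernel between two sets of 1-D inputs.
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # Standard GP regression: posterior mean and variance at test points Xs.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf_kernel(Xs, Xs)) - np.sum(v**2, axis=0)
    return mu, np.maximum(var, 0.0)

def bayesian_optimization(f, bounds=(0.0, 1.0), n_init=3, n_iter=15, beta=2.0, seed=0):
    # Maximize f over the interval by repeatedly fitting a GP surrogate
    # and querying the point that maximizes the UCB acquisition.
    rng = np.random.default_rng(seed)
    X = rng.uniform(*bounds, size=n_init)
    y = np.array([f(x) for x in X])
    grid = np.linspace(*bounds, 200)
    for _ in range(n_iter):
        mu, var = gp_posterior(X, y, grid)
        # The posterior variance is where model uncertainty enters the
        # decision: larger var means more exploration credit.
        ucb = mu + beta * np.sqrt(var)
        x_next = grid[np.argmax(ucb)]
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))
    best = np.argmax(y)
    return X[best], y[best]
```

A miscalibrated surrogate distorts the `var` term above, which is exactly why the paper asks whether calibration error translates into higher regret.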
Original language: English
Title of host publication: Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023)
Publication date: 2023
Publication status: Published - 2023
Event: 39th Conference on Uncertainty in Artificial Intelligence (UAI) - Pittsburgh, United States
Duration: 31 Jul 2023 - 4 Aug 2023


Conference: 39th Conference on Uncertainty in Artificial Intelligence
Country/Territory: United States


