On the role of Model Uncertainties in Bayesian Optimization

Research output: Chapter in Book/Report/Conference proceeding › Article in proceedings › Research › peer-review


Abstract

Bayesian optimization (BO) is a popular method for black-box optimization, which relies on uncertainty as part of its decision-making process when deciding which experiment to perform next. However, little work has addressed the effect of uncertainty on the performance of the BO algorithm and to what extent calibrated uncertainties improve the ability to find the global optimum. In this work, we provide an extensive study of the relationship between BO performance (regret) and uncertainty calibration for popular surrogate models, comparing them across both synthetic and real-world experiments. Our results confirm that Gaussian processes are strong surrogate models that tend to outperform other popular models. Our results further show a positive association between calibration error and regret, but interestingly, this association disappears when we control for the type of model in the analysis. We also study the effect of re-calibration and demonstrate that it generally does not lead to improved regret. Finally, we provide theoretical justification for why uncertainty calibration might be difficult to combine with BO, due to the small sample sizes commonly used.
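Illustrative only, not from the paper: a minimal sketch of the loop the abstract describes, assuming a scikit-learn Gaussian-process surrogate, an expected-improvement acquisition, and a toy 1-D objective. The interval-based calibration error at the end is one common way to quantify calibration for a probabilistic regressor, not necessarily the metric used in the paper; `objective` and the search grid are hypothetical.

    # Minimal BO sketch (assumed setup, not the authors' code).
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def objective(x):                       # hypothetical black-box function
        return np.sin(3 * x) + 0.1 * x**2

    rng = np.random.default_rng(0)
    X = rng.uniform(-2, 2, size=(5, 1))     # small initial design, typical for BO
    y = objective(X).ravel()

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5),
                                  normalize_y=True, alpha=1e-6)
    grid = np.linspace(-2, 2, 500).reshape(-1, 1)

    for _ in range(20):                     # BO loop: fit, acquire, evaluate
        gp.fit(X, y)
        mu, sigma = gp.predict(grid, return_std=True)
        best = y.min()
        # Expected improvement (minimization): the acquisition depends on the
        # model's predictive uncertainty sigma, which is why calibration could
        # plausibly matter for regret.
        z = (best - mu) / np.maximum(sigma, 1e-12)
        ei = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = grid[np.argmax(ei)].reshape(1, -1)
        X = np.vstack([X, x_next])
        y = np.append(y, objective(x_next).ravel())

    # One simple calibration measure: fraction of points whose true value falls
    # inside the central predictive interval, averaged over nominal coverages.
    mu, sigma = gp.predict(grid, return_std=True)
    y_true = objective(grid).ravel()
    levels = np.linspace(0.1, 0.9, 9)
    observed = [np.mean(np.abs(y_true - mu) <= norm.ppf(0.5 + p / 2) * sigma)
                for p in levels]
    calib_error = np.mean(np.abs(np.array(observed) - levels))
    print(f"regret: {y.min() - y_true.min():.4f}, "
          f"calibration error: {calib_error:.3f}")

Regret here is simple regret against the grid minimum; the paper's analysis relates such regret to calibration error across surrogate models.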
Original language: English
Title of host publication: Proceedings of the 39th Conference on Uncertainty in Artificial Intelligence (UAI 2023)
Volume: 216
Publication date: 2023
Pages: 592–601
Publication status: Published - 2023
Event: 39th Conference on Uncertainty in Artificial Intelligence - Pittsburgh, United States
Duration: 31 Jul 2023 – 4 Aug 2023

Conference

Conference: 39th Conference on Uncertainty in Artificial Intelligence
Country/Territory: United States
City: Pittsburgh
Period: 31/07/2023 – 04/08/2023
