Upper Bounds on the Running Time of the Univariate Marginal Distribution Algorithm on OneMax

Research output: Contribution to journal › Journal article – Annual report year: 2019 › Research › peer-review


The Univariate Marginal Distribution Algorithm (UMDA) is a randomized search heuristic that builds a stochastic model of the underlying optimization problem by repeatedly sampling λ solutions and adjusting the model according to the best μ samples. We present a running time analysis of the UMDA on the classical OneMax benchmark function for wide ranges of the parameters μ and λ. If μ ≥ c log n for some constant c > 0 and λ = (1 + Θ(1))μ, we obtain a general bound O(μn) on the expected running time. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval [1/n, 1 − 1/n]. If μ ≥ c′√n log n for a constant c′ > 0 and λ = (1 + Θ(1))μ, the behavior of the algorithm changes and the bound on the expected running time becomes O(μ√n), which typically holds even if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound Ω(μ√n + n log n) by Krejca and Witt (Proceedings of FOGA 2017, ACM Press, New York, pp 65–79, 2017) and turn out to be tight for the two very different choices μ = c log n and μ = c′√n log n. They also improve the previously best known upper bound O(n log n log log n) by Dang and Lehre (Proceedings of GECCO ’15, ACM Press, New York, pp 513–518, 2015), which was established for μ = c log n and λ = (1 + Θ(1))μ.
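The abstract's description of the UMDA — sample λ solutions from a product distribution, select the best μ, update the marginals, and confine them to the borders [1/n, 1 − 1/n] — can be sketched in a few lines of Python. This is a minimal illustrative implementation for OneMax, not the paper's analyzed pseudocode; the parameter names `mu`, `lam`, and `max_iters` are our own choices.

```python
import random

def onemax(x):
    # OneMax: the number of one-bits in the bit string
    return sum(x)

def umda(n, mu, lam, max_iters=10_000, seed=0):
    """Minimal UMDA sketch on OneMax with borders [1/n, 1 - 1/n]."""
    rng = random.Random(seed)
    p = [0.5] * n  # one marginal probability per bit position
    for _ in range(max_iters):
        # sample lambda solutions from the product distribution given by p
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n)]
               for _ in range(lam)]
        pop.sort(key=onemax, reverse=True)
        if onemax(pop[0]) == n:
            return pop[0]  # optimum found
        best = pop[:mu]  # select the mu best samples
        for i in range(n):
            # set each marginal to the frequency of ones among the best ...
            freq = sum(x[i] for x in best) / mu
            # ... and confine it to the borders [1/n, 1 - 1/n]
            p[i] = min(max(freq, 1 / n), 1 - 1 / n)
    return max(pop, key=onemax)
```

For instance, `umda(15, mu=20, lam=40)` quickly samples the all-ones string; the borders prevent any marginal from freezing at 0 or 1, which is exactly the assumption behind the first O(μn) bound.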

Original language: English
Issue number: 2
Pages (from-to): 632-667
Publication status: Published - 15 Feb 2019

Research areas

  • Estimation-of-distribution algorithms, Randomized search heuristics, Running time analysis, UMDA
