Upper Bounds on the Running Time of the Univariate Marginal Distribution Algorithm on OneMax

Carsten Witt*

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review


Abstract

The Univariate Marginal Distribution Algorithm (UMDA) is a randomized search heuristic that builds a stochastic model of the underlying optimization problem by repeatedly sampling λ solutions and adjusting the model according to the best μ samples. We present a running time analysis of the UMDA on the classical OneMax benchmark function for wide ranges of the parameters μ and λ. If μ ≥ c log n for some constant c > 0 and λ = (1 + Θ(1))μ, we obtain a general bound O(μn) on the expected running time. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval [1/n, 1 - 1/n]. If μ ≥ c′√n log n for a constant c′ > 0 and λ = (1 + Θ(1))μ, the behavior of the algorithm changes and the bound on the expected running time becomes O(μ√n), which typically holds even if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound Ω(μ√n + n log n) by Krejca and Witt (Proceedings of FOGA 2017, ACM Press, New York, pp 65–79, 2017) and turn out to be tight for the two very different choices μ = c log n and μ = c′√n log n. They also improve the previously best known upper bound O(n log n log log n) by Dang and Lehre (Proceedings of GECCO ’15, ACM Press, New York, pp 513–518, 2015) that was established for μ = c log n and λ = (1 + Θ(1))μ.
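
For a concrete picture of the algorithm analyzed above, the following is a minimal, illustrative Python sketch of the UMDA on OneMax as described in the abstract: sample λ solutions from the current product distribution, select the best μ, set each marginal probability to the observed frequency of ones, and optionally clamp the marginals to [1/n, 1 - 1/n]. All function names, parameter values, and the evaluation budget are illustrative assumptions and are not taken from the paper.

```python
import math
import random


def onemax(x):
    """OneMax fitness: the number of one-bits in the string."""
    return sum(x)


def umda(n, mu, lam, max_evals=1_000_000, use_borders=True, seed=None):
    """Minimal UMDA sketch for OneMax (illustrative, not the paper's code).

    n is the problem size, mu the number of selected samples, lam the number
    of samples per generation. If use_borders is True, every marginal
    probability is clamped to [1/n, 1 - 1/n], as assumed for the O(mu*n)
    bound in the abstract. Returns the number of evaluations until the
    optimum is sampled, or None if the (illustrative) budget runs out.
    """
    rng = random.Random(seed)
    p = [0.5] * n                       # one marginal probability per bit
    evals = 0
    while evals < max_evals:
        # Sample lam solutions independently from the product distribution.
        population = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                      for _ in range(lam)]
        evals += lam
        population.sort(key=onemax, reverse=True)
        best = population[:mu]          # keep the mu fittest samples
        if onemax(best[0]) == n:
            return evals                # optimum 1^n found
        # Update each marginal to the frequency of ones among the best.
        for i in range(n):
            freq = sum(x[i] for x in best) / mu
            if use_borders:
                freq = min(max(freq, 1.0 / n), 1.0 - 1.0 / n)
            p[i] = freq
    return None


if __name__ == "__main__":
    n = 100
    mu = 3 * int(math.log(n))           # e.g. mu = c log n with c = 3
    lam = 2 * mu                        # lam = (1 + Theta(1)) * mu
    print(umda(n, mu, lam, seed=42))
```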

Original language: English
Journal: Algorithmica
Volume: 81
Issue number: 2
Pages (from-to): 632-667
ISSN: 0178-4617
DOIs: 10.1007/s00453-018-0463-0
Publication status: Published - 15 Feb 2019

Keywords

  • Estimation-of-distribution algorithms
  • Randomized search heuristics
  • Running time analysis
  • UMDA

Cite this

@article{992730e7cfa1470193c64bcf14498122,
title = "Upper Bounds on the Running Time of the Univariate Marginal Distribution Algorithm on OneMax",
abstract = "The Univariate Marginal Distribution Algorithm (UMDA) is a randomized search heuristic that builds a stochastic model of the underlying optimization problem by repeatedly sampling λ solutions and adjusting the model according to the best μ samples. We present a running time analysis of the UMDA on the classical OneMax benchmark function for wide ranges of the parameters μ and λ. If μ ≥ c log n for some constant c > 0 and λ = (1 + Θ(1))μ, we obtain a general bound O(μn) on the expected running time. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval [1/n, 1 - 1/n]. If μ ≥ c′√n log n for a constant c′ > 0 and λ = (1 + Θ(1))μ, the behavior of the algorithm changes and the bound on the expected running time becomes O(μ√n), which typically holds even if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound Ω(μ√n + n log n) by Krejca and Witt (Proceedings of FOGA 2017, ACM Press, New York, pp 65–79, 2017) and turn out to be tight for the two very different choices μ = c log n and μ = c′√n log n. They also improve the previously best known upper bound O(n log n log log n) by Dang and Lehre (Proceedings of GECCO ’15, ACM Press, New York, pp 513–518, 2015) that was established for μ = c log n and λ = (1 + Θ(1))μ.",
keywords = "Estimation-of-distribution algorithms, Randomized search heuristics, Running time analysis, UMDA",
author = "Carsten Witt",
year = "2019",
month = "2",
day = "15",
doi = "10.1007/s00453-018-0463-0",
language = "English",
volume = "81",
pages = "632--667",
journal = "Algorithmica",
issn = "0178-4617",
publisher = "Springer New York",
number = "2",

}

Upper Bounds on the Running Time of the Univariate Marginal Distribution Algorithm on OneMax. / Witt, Carsten.

In: Algorithmica, Vol. 81, No. 2, 15.02.2019, p. 632-667.

Research output: Contribution to journal › Journal article › Research › peer-review

TY - JOUR

T1 - Upper Bounds on the Running Time of the Univariate Marginal Distribution Algorithm on OneMax

AU - Witt, Carsten

PY - 2019/2/15

Y1 - 2019/2/15

N2 - The Univariate Marginal Distribution Algorithm (UMDA) is a randomized search heuristic that builds a stochastic model of the underlying optimization problem by repeatedly sampling λ solutions and adjusting the model according to the best μ samples. We present a running time analysis of the UMDA on the classical OneMax benchmark function for wide ranges of the parameters μ and λ. If μ ≥ c log n for some constant c > 0 and λ = (1 + Θ(1))μ, we obtain a general bound O(μn) on the expected running time. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval [1/n, 1 - 1/n]. If μ ≥ c′√n log n for a constant c′ > 0 and λ = (1 + Θ(1))μ, the behavior of the algorithm changes and the bound on the expected running time becomes O(μ√n), which typically holds even if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound Ω(μ√n + n log n) by Krejca and Witt (Proceedings of FOGA 2017, ACM Press, New York, pp 65–79, 2017) and turn out to be tight for the two very different choices μ = c log n and μ = c′√n log n. They also improve the previously best known upper bound O(n log n log log n) by Dang and Lehre (Proceedings of GECCO ’15, ACM Press, New York, pp 513–518, 2015) that was established for μ = c log n and λ = (1 + Θ(1))μ.

AB - The Univariate Marginal Distribution Algorithm (UMDA) is a randomized search heuristic that builds a stochastic model of the underlying optimization problem by repeatedly sampling λ solutions and adjusting the model according to the best μ samples. We present a running time analysis of the UMDA on the classical OneMax benchmark function for wide ranges of the parameters μ and λ. If μ ≥ c log n for some constant c > 0 and λ = (1 + Θ(1))μ, we obtain a general bound O(μn) on the expected running time. This bound crucially assumes that all marginal probabilities of the algorithm are confined to the interval [1/n, 1 - 1/n]. If μ ≥ c′√n log n for a constant c′ > 0 and λ = (1 + Θ(1))μ, the behavior of the algorithm changes and the bound on the expected running time becomes O(μ√n), which typically holds even if the borders on the marginal probabilities are omitted. The results supplement the recently derived lower bound Ω(μ√n + n log n) by Krejca and Witt (Proceedings of FOGA 2017, ACM Press, New York, pp 65–79, 2017) and turn out to be tight for the two very different choices μ = c log n and μ = c′√n log n. They also improve the previously best known upper bound O(n log n log log n) by Dang and Lehre (Proceedings of GECCO ’15, ACM Press, New York, pp 513–518, 2015) that was established for μ = c log n and λ = (1 + Θ(1))μ.

KW - Estimation-of-distribution algorithms

KW - Randomized search heuristics

KW - Running time analysis

KW - UMDA

U2 - 10.1007/s00453-018-0463-0

DO - 10.1007/s00453-018-0463-0

M3 - Journal article

VL - 81

SP - 632

EP - 667

JO - Algorithmica

JF - Algorithmica

SN - 0178-4617

IS - 2

ER -