The compact Genetic Algorithm (cGA) evolves a probability distribution favoring optimal solutions in the underlying search space by repeatedly sampling from the distribution and updating it towards promising samples. We study the intricate dynamics of the cGA on the test function OneMax, and how its performance depends on the hypothetical population size K, which determines how quickly decisions about promising bit values are fixated in the probabilistic model. It is known that the cGA and the Univariate Marginal Distribution Algorithm (UMDA), a related algorithm whose population size is called λ, run in expected time O(n log n) when the population size is just large enough (K = Θ(√n log n) and λ = Θ(√n log n), respectively) to avoid wrong decisions being fixated. The UMDA also shows the same performance in a very different regime (λ = Θ(log n), equivalent to K = Θ(log n) in the cGA) with a much smaller population size, but for very different reasons: many wrong decisions are fixated initially, but then reverted efficiently. If the population size is even smaller (o(log n)), the time is exponential. We show that population sizes in between the two optimal regimes are worse as they yield larger runtimes: we prove a lower bound of Ω(K^{1/3} n + n log n) for the cGA on OneMax for K = O(√n / log² n). For K = Ω(log³ n) the runtime increases with growing K before dropping again to O(K√n + n log n) for K = Ω(√n log n). This suggests that the expected runtime of the cGA is a bimodal function in K with two very different optimal regions and worse performance in between.
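The sampling-and-update loop described above can be illustrated by a minimal sketch of the cGA on OneMax. This is an illustration only: the border values 1/n and 1 − 1/n, the tie-breaking rule, and the stopping criterion are common conventions in the runtime-analysis literature, not details fixed by this abstract.

```python
import random

def onemax(x):
    # Fitness of a bit string: the number of one-bits.
    return sum(x)

def cga(n, K, max_iters=200_000):
    """Minimal cGA on OneMax (illustrative sketch).

    Maintains one frequency p[i] per bit. Each iteration samples two
    solutions from the current distribution, compares their fitness,
    and shifts every frequency where the two samples differ by 1/K
    towards the winner's bit value, capped at the borders 1/n and
    1 - 1/n. Returns the iteration at which all frequencies reach the
    upper border, or max_iters if that never happens.
    """
    p = [0.5] * n
    for t in range(1, max_iters + 1):
        x = [1 if random.random() < p[i] else 0 for i in range(n)]
        y = [1 if random.random() < p[i] else 0 for i in range(n)]
        winner, loser = (x, y) if onemax(x) >= onemax(y) else (y, x)
        for i in range(n):
            if winner[i] != loser[i]:
                step = 1 / K if winner[i] == 1 else -1 / K
                p[i] = min(1 - 1 / n, max(1 / n, p[i] + step))
        if all(pi >= 1 - 1 / n for pi in p):
            return t
    return max_iters
```

Here 1/K is the step size of the model update, which makes concrete why K governs how quickly bit values are fixated: a small K lets single comparisons push frequencies to the borders fast, while a large K averages over many comparisons before a decision is fixated.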
- Compact genetic algorithm
- Estimation-of-distribution algorithms
- Evolutionary algorithms
- Runtime analysis