State-space models - from the EM algorithm to a gradient approach

Rasmus Kongsgaard Olsson, Kaare Brandt Petersen, Tue Lehn-Schiøler

    Research output: Contribution to journal › Journal article › Research › peer-review

    Abstract

    Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative because the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. At high signal-to-noise ratios, where EM is particularly prone to slow convergence, we show that gradient-based learning yields a sizable reduction in computation time.
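    To make the approach concrete, the sketch below fits a scalar linear state-space model by handing the Kalman-filter log-likelihood to an off-the-shelf quasi-Newton optimizer (SciPy's L-BFGS-B). This is a minimal illustration, not the authors' implementation: the scalar model, the parameterization (a, log q, log r), and the use of SciPy are assumptions, and the optimizer here approximates the gradient numerically, whereas the paper obtains the exact gradient by recycling the E-step quantities of EM (Fisher's identity).

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(theta, y):
    """Negative log-likelihood of a scalar linear state-space model
        x_t = a x_{t-1} + v_t,  v_t ~ N(0, q)
        y_t = x_t + w_t,        w_t ~ N(0, r)
    via the Kalman-filter prediction-error decomposition."""
    a = theta[0]
    q, r = np.exp(theta[1]), np.exp(theta[2])  # log-variances keep q, r > 0
    x, P = 0.0, 1.0                            # initial state estimate and variance
    ll = 0.0
    for obs in y:
        # Time update (prediction)
        x_pred = a * x
        P_pred = a * a * P + q
        # Innovation and its variance
        e = obs - x_pred
        S = P_pred + r
        ll += -0.5 * (np.log(2.0 * np.pi * S) + e * e / S)
        # Measurement update
        K = P_pred / S
        x = x_pred + K * e
        P = (1.0 - K) * P_pred
    return -ll

# Simulate data from known parameters (a = 0.9, q = 0.1, r = 1.0).
rng = np.random.default_rng(0)
T, a_true = 500, 0.9
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(scale=np.sqrt(0.1))
y = x + rng.normal(scale=1.0, size=T)

# Quasi-Newton (L-BFGS) maximization of the log-likelihood.
theta0 = np.array([0.5, np.log(1.0), np.log(1.0)])
res = minimize(neg_log_likelihood, theta0, args=(y,), method="L-BFGS-B")
a_hat, q_hat, r_hat = res.x[0], np.exp(res.x[1]), np.exp(res.x[2])
print(f"a = {a_hat:.3f}, q = {q_hat:.3f}, r = {r_hat:.3f}")
```

    Supplying the exact gradient (as the paper does) instead of letting the optimizer difference the likelihood is what makes the method competitive in computation time, since each likelihood evaluation already requires a full Kalman-filter pass.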
    Original language: English
    Journal: Neural Computation
    Volume: 19
    Issue number: 4
    Pages (from-to): 1097-1111
    ISSN: 0899-7667
    DOIs
    Publication status: Published - 2007

    Keywords

    • EM
    • state-space models
    • quasi-Newton

