State-space models - from the EM algorithm to a gradient approach

Publication: Peer-reviewed journal article · Annual report year: 2007


Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative because the exact gradient of the log-likelihood can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method on three relevant instances of the linear state-space model. At high signal-to-noise ratios, where EM is particularly prone to slow convergence, we show that gradient-based learning yields a sizable reduction in computation time.
Original language: English
Journal: Neural Computation
Publication date: 2007
Volume: 19
Issue: 4
Pages: 1097-1111
ISSN: 0899-7667
State: Published
Citations: Web of Science® Times Cited: 5

Keywords

  • EM, state-space models, quasi-Newton

ID: 6411794