Abstract
Slow convergence is observed in the expectation-maximization (EM) algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative because the exact gradient of the log-likelihood function can be computed by recycling components of the EM algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. At high signal-to-noise ratios, where EM is particularly prone to converge slowly, we show that gradient-based learning results in a sizable reduction of computation time.
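The following is a minimal sketch, not the authors' code, of the idea the abstract describes: evaluate the log-likelihood of a linear state-space model with a Kalman filter and hand it to an off-the-shelf quasi-Newton optimizer. It assumes a scalar model with the observation coefficient fixed to 1, and it lets SciPy approximate the gradient by finite differences for brevity, whereas the paper obtains the exact gradient by recycling quantities already computed for the EM algorithm. All names and parameter choices here are illustrative.

```python
# Illustrative sketch (assumptions noted above, not the authors' implementation):
# fit a scalar linear state-space model
#     x_t = a x_{t-1} + w_t,   w_t ~ N(0, q)
#     y_t =   x_t     + v_t,   v_t ~ N(0, r)
# by minimizing the Kalman-filter negative log-likelihood with a
# quasi-Newton method (BFGS).
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, y):
    """Negative log-likelihood via the prediction-error decomposition.

    theta = (a, log_q, log_r): transition coefficient and log-variances
    of the state and observation noise.
    """
    a, q, r = theta[0], np.exp(theta[1]), np.exp(theta[2])
    x, p = 0.0, 1.0              # filter mean and variance of the state
    nll = 0.0
    for yt in y:
        # Time update (prediction)
        x_pred, p_pred = a * x, a * a * p + q
        # Innovation and its variance (observation coefficient is 1)
        e, s = yt - x_pred, p_pred + r
        nll += 0.5 * (np.log(2 * np.pi * s) + e * e / s)
        # Measurement update
        k = p_pred / s           # Kalman gain
        x, p = x_pred + k * e, (1 - k) * p_pred
    return nll

# Simulate data from a known model and recover its parameters.
rng = np.random.default_rng(0)
T, a_true, q_true, r_true = 500, 0.9, 0.1, 0.5
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + rng.normal(scale=np.sqrt(q_true))
y = x + rng.normal(scale=np.sqrt(r_true), size=T)

theta0 = np.array([0.5, np.log(1.0), np.log(1.0)])
res = minimize(neg_log_lik, theta0, args=(y,), method="BFGS")
print("a =", res.x[0], " q =", np.exp(res.x[1]), " r =", np.exp(res.x[2]))
```

In the setting of the paper, the finite-difference gradient used above would be replaced by the exact gradient assembled from the smoothed moments that the EM algorithm already computes, which is what makes the quasi-Newton approach cheap in practice.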
| Original language | English |
|---|---|
| Journal | Neural Computation |
| Volume | 19 |
| Issue number | 4 |
| Pages (from-to) | 1097-1111 |
| ISSN | 0899-7667 |
| DOIs | |
| Publication status | Published - 2007 |
Keywords
- EM
- state-space models
- quasi-Newton