Learning hidden Markov models with persistent states by penalizing jumps

Peter Nystrup*, Erik Lindström, Henrik Madsen

*Corresponding author for this work

Research output: Contribution to journal › Journal article › Research › peer-review

Abstract

Hidden Markov models are applied in many expert and intelligent systems to detect an underlying sequence of persistent states. When the model is misspecified or misestimated, however, this often leads to unrealistically rapid switching dynamics. To address this issue, we propose a novel estimation approach based on clustering temporal features while penalizing jumps. We compare the approach to spectral clustering and the standard approach of maximizing the likelihood function in an extensive simulation study and an application to financial data. The advantages of the proposed jump estimator include that it learns the hidden state sequence and model parameters simultaneously and faster while providing control over the transition rate, it is less sensitive to initialization, it performs better when the number of states increases, and it is robust to misspecified conditional distributions. The value of estimating the true persistence of the state process is illustrated through a simple trading strategy where improved estimates result in much lower transaction costs. Robustness is particularly critical when the model is part of a system used in production. Therefore, our proposed estimator significantly improves the potential for using hidden Markov models in practical applications.
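The core idea of penalizing jumps can be sketched as follows. This is a hypothetical minimal implementation, not the authors' code: it alternates between updating state means (a clustering step) and decoding the state sequence with a dynamic program that charges a fixed penalty `lam` for every state switch, which is what controls the transition rate. The function name, the quantile-based initialization, and the squared-error loss are illustrative assumptions.

```python
import numpy as np

def jump_estimator(Y, K, lam, n_iters=10):
    """Hypothetical sketch of jump-penalized clustering.

    Y   : (T, d) array of temporal features
    K   : number of hidden states
    lam : jump penalty; larger values yield more persistent states
    """
    T = len(Y)
    # Deterministic initialization of state means via quantiles (an
    # illustrative choice, not prescribed by the abstract).
    mu = np.quantile(Y, np.linspace(0.0, 1.0, K), axis=0)
    s = np.zeros(T, dtype=int)
    for _ in range(n_iters):
        # Cost of assigning each observation to each state mean.
        loss = ((Y[:, None, :] - mu[None, :, :]) ** 2).sum(-1)  # (T, K)
        # Dynamic program: V[t, k] = min cost of a path ending in state k,
        # where switching states costs an extra `lam`.
        V = loss.copy()
        back = np.zeros((T, K), dtype=int)
        for t in range(1, T):
            stay = V[t - 1]                      # remain in the same state
            jump = V[t - 1].min() + lam          # cheapest switch + penalty
            back[t] = np.where(stay <= jump, np.arange(K), V[t - 1].argmin())
            V[t] += np.minimum(stay, jump)
        # Backtrack the optimal (jump-penalized) state sequence.
        s[-1] = V[-1].argmin()
        for t in range(T - 1, 0, -1):
            s[t - 1] = back[t, s[t]]
        # Update state means given the decoded sequence.
        for k in range(K):
            if (s == k).any():
                mu[k] = Y[s == k].mean(0)
    return s, mu
```

With `lam = 0` this reduces to plain clustering of the features; increasing `lam` suppresses spurious switches, so the decoded sequence stays in each state longer, mirroring the control over the transition rate described above.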

Original language: English
Article number: 113307
Journal: Expert Systems with Applications
Volume: 150
Number of pages: 14
ISSN: 0957-4174
DOIs
Publication status: Published - 15 Jul 2020

Keywords

  • Clustering
  • Dynamic programming
  • Regime switching
  • Regularization
  • Time series analysis
  • Unsupervised learning

