Abstract
This paper deals with a parametrized family of partially observed bivariate Markov chains. We establish that, under very mild assumptions, the limit of the normalized log-likelihood function is maximized when the parameters belong to the equivalence class of the true parameter, a key ingredient in establishing the consistency of the maximum likelihood estimator (MLE) in well-specified models. This result is obtained in the general framework of partially dominated models. We examine two specific cases of interest, namely, hidden Markov models (HMMs) and observation-driven time series models. In contrast with previous approaches, identifiability is addressed by relying on the uniqueness of the invariant distribution of the Markov chain associated with the complete data, regardless of its rate of convergence to equilibrium.
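To illustrate the quantity at the heart of the abstract, the following sketch computes the normalized log-likelihood n⁻¹ log p_θ(Y₁,…,Yₙ) of a toy two-state HMM via the forward algorithm, and compares its value at the data-generating parameter with that at a deliberately misspecified parameter. The specific transition and emission matrices are illustrative choices, not taken from the paper; the initial distribution is set to uniform, since under ergodicity the limit does not depend on it.

```python
import numpy as np

def normalized_log_likelihood(obs, trans, emis, init):
    """Forward algorithm in log space.

    Returns (1/n) * log p(Y_1, ..., Y_n) for a finite-state HMM with
    row-stochastic transition matrix `trans`, emission matrix `emis`
    (rows: states, columns: symbols), and initial distribution `init`.
    """
    log_alpha = np.log(init) + np.log(emis[:, obs[0]])
    for y in obs[1:]:
        # Log-sum-exp-stabilized forward recursion.
        m = log_alpha.max()
        alpha = np.exp(log_alpha - m)
        log_alpha = m + np.log(alpha @ trans) + np.log(emis[:, y])
    m = log_alpha.max()
    return (m + np.log(np.exp(log_alpha - m).sum())) / len(obs)

# Illustrative true parameters (not from the paper).
trans_true = np.array([[0.9, 0.1], [0.2, 0.8]])
emis_true = np.array([[0.8, 0.2], [0.3, 0.7]])
init = np.array([0.5, 0.5])

# Simulate the partially observed bivariate chain (X_t, Y_t); only Y is kept.
rng = np.random.default_rng(0)
n = 5000
x, obs = 0, np.empty(n, dtype=int)
obs[0] = rng.choice(2, p=emis_true[0])
for t in range(1, n):
    x = rng.choice(2, p=trans_true[x])
    obs[t] = rng.choice(2, p=emis_true[x])

# A misspecified parameter: fully uniform dynamics and emissions.
trans_wrong = np.full((2, 2), 0.5)
emis_wrong = np.full((2, 2), 0.5)

ll_true = normalized_log_likelihood(obs, trans_true, emis_true, init)
ll_wrong = normalized_log_likelihood(obs, trans_wrong, emis_wrong, init)
```

For large n the normalized log-likelihood at the true parameter should exceed its value at the misspecified one (here the uniform model yields exactly log ½ per observation), consistent with the limit being maximized on the equivalence class of the true parameter.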
| Original language | English |
|---|---|
| Pages (from-to) | 2357-2383 |
| Number of pages | 27 |
| Journal | Annals of Applied Probability |
| Volume | 26 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 1 Aug 2016 |
| Externally published | Yes |
Keywords
- Consistency
- Ergodicity
- Hidden Markov models
- Maximum likelihood
- Observation-driven models
- Time series of counts