Abstract
The particle-based rapid incremental smoother (PARIS) is a sequential Monte Carlo technique that allows for efficient online approximation of expectations of additive functionals under Feynman–Kac path distributions. Under weak assumptions, the algorithm has linear computational complexity and limited memory requirements, and it comes with a number of nonasymptotic bounds and convergence results. However, being based on self-normalized importance sampling, the PARIS estimator is biased: the bias is inversely proportional to the number of particles but, under appropriate mixing conditions, grows linearly with the time horizon. In this work, we propose the Parisian particle Gibbs (PPG) sampler, which has essentially the same complexity as PARIS but, for a given computational budget, significantly reduces the bias at the cost of a modest increase in the variance. The method is a wrapper in the sense that it runs the PARIS algorithm in the inner loop of a particle Gibbs sampler to form bias-reduced versions of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on its bias and variance as well as deviation inequalities, and we illustrate these results with supporting numerical experiments.
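To make the PARIS recursion described above concrete, the following is a minimal sketch in a hypothetical linear-Gaussian AR(1)-plus-noise model. All model parameters and names are illustrative assumptions, not taken from the paper, and the O(N) multinomial backward draw used here is the naive variant; the linear-complexity version of PARIS replaces it with rejection sampling.

```python
import numpy as np

# Hypothetical AR(1)-plus-noise model; parameters are illustrative assumptions.
rng = np.random.default_rng(0)
a, sx, sy = 0.9, 0.5, 1.0      # state coefficient, state/obs noise std
T, N, Ntilde = 50, 300, 2      # time horizon, particles, backward draws per particle

# Simulate observations from X_t = a X_{t-1} + sx*eps_t, Y_t = X_t + sy*eta_t.
x, ys = 0.0, []
for _ in range(T):
    x = a * x + sx * rng.normal()
    ys.append(x + sy * rng.normal())

def log_q(x_prev, x_curr):
    """Log transition density q(x_prev, x_curr), up to an additive constant."""
    return -0.5 * ((x_curr - a * x_prev) / sx) ** 2

# Bootstrap filter initialisation; PARIS statistics for the additive
# functional sum_t X_t, i.e. h_0(x_0) = x_0 and h_t(x_{t-1}, x_t) = x_t.
parts = rng.normal(0.0, 1.0, N)
stats = parts.copy()
logw = -0.5 * ((ys[0] - parts) / sy) ** 2

for t in range(1, T):
    w = np.exp(logw - logw.max()); w /= w.sum()
    anc = rng.choice(N, size=N, p=w)                   # multinomial resampling
    newp = a * parts[anc] + sx * rng.normal(size=N)    # bootstrap proposal
    new_stats = np.empty(N)
    for i in range(N):
        # Backward weights proportional to w_{t-1}^j q(x_{t-1}^j, x_t^i).
        lg = log_q(parts, newp[i])
        bw = w * np.exp(lg - lg.max()); bw /= bw.sum()
        J = rng.choice(N, size=Ntilde, p=bw)
        new_stats[i] = stats[J].mean() + newp[i]       # PARIS update
    parts, stats = newp, new_stats
    logw = -0.5 * ((ys[t] - parts) / sy) ** 2

w = np.exp(logw - logw.max()); w /= w.sum()
estimate = float(np.sum(w * stats))  # online estimate of E[sum_t X_t | y_{0:T-1}]
print(estimate)
```

The PPG wrapper, in turn, would iterate this kind of smoother inside a particle Gibbs (conditional SMC) outer loop, averaging the resulting estimates to reduce the bias discussed in the abstract.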
| Original language | English |
|---|---|
| Pages (from-to) | 1115-1144 |
| Number of pages | 30 |
| Journal | Statistica Sinica |
| Volume | 34 |
| DOIs | |
| Publication status | Published - 1 Apr 2024 |
| Externally published | Yes |
Keywords
- bias reduction
- particle Gibbs
- particle filters
- sequential Monte Carlo
- smoothing of additive functionals
- state-space smoothing