On the Stability of Random Matrix Product with Markovian Noise: Application to Linear Stochastic Approximation and TD Learning

Research output: Contribution to journal › Conference article › peer-review

Abstract

This paper studies the exponential stability of random matrix products driven by a general (possibly unbounded) state space Markov chain. Such stability is a cornerstone in the analysis of stochastic algorithms in machine learning (e.g., for parameter tracking in online learning or reinforcement learning). Existing results impose strong conditions, such as uniform boundedness of the matrix-valued functions and uniform ergodicity of the Markov chain. Our main contribution is an exponential stability result for the p-th moment of the random matrix product, provided that (i) the underlying Markov chain satisfies a super-Lyapunov drift condition, and (ii) the growth of the matrix-valued functions is controlled by an appropriately defined function related to the drift condition. Using this result, we derive finite-time p-th moment bounds for linear stochastic approximation schemes with constant and decreasing stepsizes and Markovian noise on a general state space. We illustrate these findings for linear value-function estimation in reinforcement learning, providing finite-time p-th moment bounds for various members of the temporal difference (TD) family of algorithms.
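To make the object of study concrete, below is a minimal sketch of a constant-stepsize linear stochastic approximation (LSA) recursion driven by Markovian noise, the scheme whose p-th moments the paper bounds. The two-state chain, the matrices A(x), and the vectors b(x) are illustrative assumptions (the paper treats general, possibly unbounded state spaces), not the paper's own example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-state Markov chain; the paper's results cover general
# (possibly unbounded) state spaces, a finite chain is for illustration only.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# State-dependent matrix/vector fields A(x), b(x) (hypothetical values,
# chosen so the averaged matrix is Hurwitz and the iterates stay stable).
A = [np.array([[-1.0, 0.2], [0.0, -0.5]]),
     np.array([[-0.8, 0.0], [0.1, -1.2]])]
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

def lsa(theta0, gamma, n_steps):
    """Constant-stepsize LSA with Markovian noise:
       theta_{k+1} = theta_k + gamma * (A(X_{k+1}) @ theta_k + b(X_{k+1}))."""
    theta, x = theta0.copy(), 0
    for _ in range(n_steps):
        x = rng.choice(2, p=P[x])                    # advance the Markov chain
        theta = theta + gamma * (A[x] @ theta + b[x])  # LSA update
    return theta

theta_hat = lsa(np.zeros(2), gamma=0.01, n_steps=50_000)
```

Unrolling this recursion expresses theta_k through a product of random matrices (I + gamma * A(X_j)); the paper's exponential stability result for such products is what yields the finite-time moment bounds, with TD learning recovered as a special case of the A(x), b(x) fields.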

Original language: English
Pages (from-to): 1711-1752
Number of pages: 42
Journal: Proceedings of Machine Learning Research
Volume: 134
Publication status: Published - 1 Jan 2021
Externally published: Yes
Event: 34th Conference on Learning Theory, COLT 2021 - Boulder, United States
Duration: 15 Aug 2021 - 19 Aug 2021

Keywords

  • Markov chains
  • TD-learning
  • linear stochastic approximation
  • stability of random matrix product
