TY - JOUR
T1 - Sequential Sample Average Majorization–Minimization
AU - Fort, Gersende
AU - Forbes, Florence
AU - Nguyen, Hien Duy
N1 - Publisher Copyright:
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
PY - 2026/2/1
Y1 - 2026/2/1
N2 - Many statistical inference and machine learning methods rely on the ability to optimize an expectation functional, whose explicit form is intractable. The typical method for conducting such optimization is to approximate the expected value problem by a size-N sample average, often referred to as Sample Average Approximation (SAA) or M-estimation. When the solution to the SAA problem cannot be obtained in closed form, the Majorization–Minimization (MM) algorithm framework constitutes a broad class of incremental optimization solutions, relying on the iterative construction of surrogates, known as majorizers, of the original problem. The ability to solve an SAA problem depends on the availability of all N observations, contemporaneously, which is difficult when N is large or data are observed as a stream. We propose a stochastic MM algorithm that solves the expected value problem via iterative SAA majorizer constructions using sequential subsets of data, which we call Sequential Sample Average Majorization–Minimization (SAM2). Compared to previous stochastic MM algorithm variants, our method permits an extended definition of majorizers, and does not rely on convexity assumptions, smoothness assumptions, or restrictions on functional classes for objectives and majorizers. We develop a theory of stochastic convergence for SAM2, made possible via the presentation of a novel double array uniform strong law of large numbers. Examples of SAM2 algorithms are given along with a numerical demonstration of SAM2 to quantile regression problems, in the regular and sparse parameter settings, including both convex and non-convex objective functions.
KW - Majorization-Minimization Algorithms
KW - Non-convex optimization
KW - Quantile regression
KW - Sample average approximation
KW - Stochastic optimization
UR - https://www.scopus.com/pages/publications/105023579999
U2 - 10.1007/s11222-025-10780-x
DO - 10.1007/s11222-025-10780-x
M3 - Article
AN - SCOPUS:105023579999
SN - 0960-3174
VL - 36
JO - Statistics and Computing
JF - Statistics and Computing
IS - 1
M1 - 31
ER -