TY - JOUR
T1 - Distributed Personalized Gradient Tracking with Convex Parametric Models
AU - Notarnicola, Ivano
AU - Simonetto, Andrea
AU - Farina, Francesco
AU - Notarstefano, Giuseppe
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - We present a distributed optimization algorithm for solving online personalized optimization problems over a network of computing and communicating nodes, each of which is linked to a specific user. The local objective functions are assumed to have a composite structure and to consist of a known time-varying (engineering) part and an unknown (user-specific) part. The unknown part is assumed to have a known parametric (e.g., quadratic) structure a priori, whose parameters are to be learned along with the evolution of the algorithm. The algorithm is composed of two intertwined components: 1) a dynamic gradient tracking scheme for finding local solution estimates and 2) a recursive least squares scheme for estimating the unknown parameters via users' noisy feedback on the local solution estimates. The algorithm is shown to exhibit bounded regret under suitable assumptions. Finally, a numerical example corroborates the theoretical analysis.
AB - We present a distributed optimization algorithm for solving online personalized optimization problems over a network of computing and communicating nodes, each of which is linked to a specific user. The local objective functions are assumed to have a composite structure and to consist of a known time-varying (engineering) part and an unknown (user-specific) part. The unknown part is assumed to have a known parametric (e.g., quadratic) structure a priori, whose parameters are to be learned along with the evolution of the algorithm. The algorithm is composed of two intertwined components: 1) a dynamic gradient tracking scheme for finding local solution estimates and 2) a recursive least squares scheme for estimating the unknown parameters via users' noisy feedback on the local solution estimates. The algorithm is shown to exhibit bounded regret under suitable assumptions. Finally, a numerical example corroborates the theoretical analysis.
KW - Distributed learning
KW - distributed optimization
KW - online optimization
U2 - 10.1109/TAC.2022.3147007
DO - 10.1109/TAC.2022.3147007
M3 - Article
AN - SCOPUS:85124215120
SN - 0018-9286
VL - 68
SP - 588
EP - 595
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 1
ER -