TY - GEN
T1 - Safe grid search with optimal complexity
AU - Ndiaye, Eugene
AU - Le, Tam
AU - Fercoq, Olivier
AU - Salmon, Joseph
AU - Takeuchi, Ichiro
N1 - Publisher Copyright:
© 2019 by the author(s).
PY - 2019/1/1
Y1 - 2019/1/1
N2 - Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task. In this paper, we revisit the techniques of approximating the regularization path up to predefined tolerance ε in a unified framework and show that its complexity is O(1/ε^(1/d)) for uniformly convex loss of order d ≥ 2 and O(1/√ε) for Generalized Self-Concordant functions. This framework encompasses least-squares but also logistic regression, a case that, as far as we know, was not handled as precisely in previous works. We leverage our technique to provide refined bounds on the validation error as well as a practical algorithm for hyperparameter tuning. The latter has a global convergence guarantee when targeting a prescribed accuracy on the validation set. Last but not least, our approach helps relieve the practitioner of the (often neglected) task of selecting a stopping criterion when optimizing over the training set: our method automatically calibrates this criterion based on the targeted accuracy on the validation set.
AB - Popular machine learning estimators involve regularization parameters that can be challenging to tune, and standard strategies rely on grid search for this task. In this paper, we revisit the techniques of approximating the regularization path up to predefined tolerance ε in a unified framework and show that its complexity is O(1/ε^(1/d)) for uniformly convex loss of order d ≥ 2 and O(1/√ε) for Generalized Self-Concordant functions. This framework encompasses least-squares but also logistic regression, a case that, as far as we know, was not handled as precisely in previous works. We leverage our technique to provide refined bounds on the validation error as well as a practical algorithm for hyperparameter tuning. The latter has a global convergence guarantee when targeting a prescribed accuracy on the validation set. Last but not least, our approach helps relieve the practitioner of the (often neglected) task of selecting a stopping criterion when optimizing over the training set: our method automatically calibrates this criterion based on the targeted accuracy on the validation set.
M3 - Conference contribution
AN - SCOPUS:85077958336
T3 - 36th International Conference on Machine Learning, ICML 2019
SP - 8362
EP - 8371
BT - 36th International Conference on Machine Learning, ICML 2019
PB - International Machine Learning Society (IMLS)
T2 - 36th International Conference on Machine Learning, ICML 2019
Y2 - 9 June 2019 through 15 June 2019
ER -