TY - GEN
T1 - Online Hyperparameter Optimization for Streaming Neural Networks
AU - Gunasekara, Nuwan
AU - Gomes, Heitor Murilo
AU - Pfahringer, Bernhard
AU - Bifet, Albert
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
N2 - Neural networks have enjoyed tremendous success in many areas over the last decade. They are also receiving more and more attention in learning from data streams, which is inherently incremental. An incremental setting poses challenges for hyperparameter optimization, which is essential to obtain satisfactory network performance. To overcome this challenge, we introduce Continuously Adaptive Neural networks for Data streams (CAND). For every prediction, CAND chooses the current best network from a pool of candidates by continuously monitoring the performance of all candidate networks. The candidates are trained using different optimizers and hyperparameters. An experimental comparison against three state-of-the-art stream learning methods, over 17 benchmark streaming datasets, confirms the competitive performance of CAND, especially on high-dimensional data. We also investigate two orthogonal heuristics for accelerating CAND, which trade off small amounts of accuracy for significant run-time gains. We observe that training on small mini-batches yields similar accuracy to single-instance fully incremental training, even on evolving data streams.
AB - Neural networks have enjoyed tremendous success in many areas over the last decade. They are also receiving more and more attention in learning from data streams, which is inherently incremental. An incremental setting poses challenges for hyperparameter optimization, which is essential to obtain satisfactory network performance. To overcome this challenge, we introduce Continuously Adaptive Neural networks for Data streams (CAND). For every prediction, CAND chooses the current best network from a pool of candidates by continuously monitoring the performance of all candidate networks. The candidates are trained using different optimizers and hyperparameters. An experimental comparison against three state-of-the-art stream learning methods, over 17 benchmark streaming datasets, confirms the competitive performance of CAND, especially on high-dimensional data. We also investigate two orthogonal heuristics for accelerating CAND, which trade off small amounts of accuracy for significant run-time gains. We observe that training on small mini-batches yields similar accuracy to single-instance fully incremental training, even on evolving data streams.
KW - Data stream learning
KW - Evolving data streams
KW - Neural networks
U2 - 10.1109/IJCNN55064.2022.9891953
DO - 10.1109/IJCNN55064.2022.9891953
M3 - Conference contribution
AN - SCOPUS:85140753043
T3 - Proceedings of the International Joint Conference on Neural Networks
BT - 2022 International Joint Conference on Neural Networks, IJCNN 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2022 International Joint Conference on Neural Networks, IJCNN 2022
Y2 - 18 July 2022 through 23 July 2022
ER -