TY - GEN
T1 - Dodging the Double Descent in Deep Neural Networks
AU - Quétu, Victor
AU - Tartaglione, Enzo
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023/1/1
Y1 - 2023/1/1
N2 - Finding the optimal size of deep learning models is a timely question of broad impact, especially for energy-saving schemes. Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community: as the model's size grows, the performance first degrades and then improves again. This raises serious questions about the optimal model size needed to maintain high generalization: the model must be sufficiently over-parametrized, but adding too many parameters wastes training resources. Is it possible to find the best trade-off efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, although a final answer is yet to be found. We empirically observe that there is hope of dodging the double descent in complex scenarios with proper regularization, as even simple ℓ2 regularization already contributes positively in this direction.
AB - Finding the optimal size of deep learning models is a timely question of broad impact, especially for energy-saving schemes. Very recently, an unexpected phenomenon, the "double descent", has caught the attention of the deep learning community: as the model's size grows, the performance first degrades and then improves again. This raises serious questions about the optimal model size needed to maintain high generalization: the model must be sufficiently over-parametrized, but adding too many parameters wastes training resources. Is it possible to find the best trade-off efficiently? Our work shows that the double descent phenomenon is potentially avoidable with proper conditioning of the learning problem, although a final answer is yet to be found. We empirically observe that there is hope of dodging the double descent in complex scenarios with proper regularization, as even simple ℓ2 regularization already contributes positively in this direction.
KW - Double descent
KW - deep learning
KW - pruning
KW - regularization
U2 - 10.1109/ICIP49359.2023.10222624
DO - 10.1109/ICIP49359.2023.10222624
M3 - Conference contribution
AN - SCOPUS:85180786350
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 1625
EP - 1629
BT - 2023 IEEE International Conference on Image Processing, ICIP 2023 - Proceedings
PB - IEEE Computer Society
T2 - 30th IEEE International Conference on Image Processing, ICIP 2023
Y2 - 8 October 2023 through 11 October 2023
ER -