TY - GEN
T1 - An Experimental Study of the Impact of Pre-Training on the Pruning of a Convolutional Neural Network
AU - Hubens, Nathan
AU - Mancas, Matei
AU - Decombas, Marc
AU - Preda, Marius
AU - Zaharia, Titus
AU - Gosselin, Bernard
AU - Dutoit, Thierry
N1 - Publisher Copyright:
© 2020 ACM.
PY - 2020/1/7
Y1 - 2020/1/7
N2 - In recent years, deep neural networks have achieved wide success in various application domains. However, they require substantial computational and memory resources, which severely hinders their deployment, notably on mobile devices or for real-time applications. Neural networks usually involve a large number of parameters, which correspond to the weights of the network. Such parameters, obtained through a training process, determine the performance of the network. However, they are also highly redundant. Pruning methods attempt to reduce the size of the parameter set by identifying and removing the irrelevant weights. In this paper, we examine the impact of the training strategy on pruning efficiency. Two training modalities are considered and compared: (1) fine-tuning and (2) training from scratch. The experimental results obtained on four datasets (CIFAR10, CIFAR100, SVHN, and Caltech101) and for two different CNNs (VGG16 and MobileNet) demonstrate that a network that has been pre-trained on a large corpus (e.g., ImageNet) and then fine-tuned on a particular dataset can be pruned much more efficiently (up to 80% parameter reduction) than the same network trained from scratch.
AB - In recent years, deep neural networks have achieved wide success in various application domains. However, they require substantial computational and memory resources, which severely hinders their deployment, notably on mobile devices or for real-time applications. Neural networks usually involve a large number of parameters, which correspond to the weights of the network. Such parameters, obtained through a training process, determine the performance of the network. However, they are also highly redundant. Pruning methods attempt to reduce the size of the parameter set by identifying and removing the irrelevant weights. In this paper, we examine the impact of the training strategy on pruning efficiency. Two training modalities are considered and compared: (1) fine-tuning and (2) training from scratch. The experimental results obtained on four datasets (CIFAR10, CIFAR100, SVHN, and Caltech101) and for two different CNNs (VGG16 and MobileNet) demonstrate that a network that has been pre-trained on a large corpus (e.g., ImageNet) and then fine-tuned on a particular dataset can be pruned much more efficiently (up to 80% parameter reduction) than the same network trained from scratch.
KW - CNN compression
KW - Fine-tuning
KW - Neural Network Pruning
U2 - 10.1145/3378184.3378224
DO - 10.1145/3378184.3378224
M3 - Conference contribution
AN - SCOPUS:85081085465
T3 - ACM International Conference Proceeding Series
BT - Proceedings of APPIS 2020 - 3rd International Conference on Applications of Intelligent Systems
A2 - Petkov, Nicolai
A2 - Strisciuglio, Nicola
A2 - Travieso-Gonzalez, Carlos M.
PB - Association for Computing Machinery
T2 - 3rd International Conference on Applications of Intelligent Systems, APPIS 2020
Y2 - 7 January 2020 through 9 January 2020
ER -