Pruning Artificial Neural Networks: A Way to Find Well-Generalizing, High-Entropy Sharp Minima

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Recently, a race towards the simplification of deep networks has begun, showing that it is effectively possible to reduce the size of these models with minimal or no performance loss. However, there is a general lack of understanding of why these pruning strategies are effective. In this work, we compare and analyze pruned solutions obtained with two different pruning approaches, one-shot and gradual, showing the higher effectiveness of the latter. In particular, we find that gradual pruning allows access to narrow, well-generalizing minima, which are typically missed by one-shot approaches. In this work we also propose PSP-entropy, a measure of how strongly a given neuron correlates with specific learned classes. Interestingly, we observe that the features extracted by iteratively-pruned models are less correlated to specific classes, potentially making these models a better fit for transfer learning.
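The two pruning regimes contrasted in the abstract can be illustrated with a generic magnitude-pruning sketch (this is an illustration of the one-shot vs. gradual distinction in general, not the authors' exact procedure; the function names and the `retrain` callback are hypothetical):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def one_shot_prune(weights, target_sparsity):
    # One-shot: reach the target sparsity in a single pruning step.
    return magnitude_prune(weights, target_sparsity)

def gradual_prune(weights, target_sparsity, steps, retrain):
    # Gradual (iterative): alternate small pruning steps with fine-tuning,
    # letting the surviving weights adapt between steps.
    w = weights
    for step in range(1, steps + 1):
        w = magnitude_prune(w, target_sparsity * step / steps)
        w = retrain(w)  # hypothetical fine-tuning callback
    return w
```

Because already-pruned weights have zero magnitude, each gradual step keeps earlier prunings and only removes additional small weights, which is what lets the network recover between steps.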

Original language: English
Title of host publication: Artificial Neural Networks and Machine Learning – ICANN 2020, 29th International Conference on Artificial Neural Networks, Proceedings
Editors: Igor Farkaš, Paolo Masulli, Stefan Wermter
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 67–78
Number of pages: 12
ISBN (Print): 9783030616151
DOIs
Publication status: Published - 1 Jan 2020
Externally published: Yes
Event: 29th International Conference on Artificial Neural Networks, ICANN 2020 - Bratislava, Slovakia
Duration: 15 Sept 2020 – 18 Sept 2020

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 12397 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 29th International Conference on Artificial Neural Networks, ICANN 2020
Country/Territory: Slovakia
City: Bratislava
Period: 15/09/20 → 18/09/20

Keywords

  • Deep learning
  • Entropy
  • Post synaptic potential
  • Pruning
  • Sharp minima
