Abstract
We analytically derive the geometrical structure of the weight space in multilayer neural networks in terms of the volumes of couplings associated with the internal representations of the training set. In this framework, focusing on the parity and committee machines, we show how to deduce learning and generalization capabilities, both reinterpreting some known properties and finding new exact results. The relationship between our approach and information theory as well as the Mitchison-Durbin calculation is established. Our results are exact in the limit of a large number K of hidden units, whereas for finite K a complete geometrical interpretation of symmetry breaking is given.
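To make the architectures named in the abstract concrete, here is a minimal NumPy sketch (all function names are ours, not from the paper) of a fully connected machine with K hidden units: the internal representation of an input is the vector of hidden-unit signs, a committee machine takes their majority vote, and a parity machine takes their product.

```python
import numpy as np

def hidden_reps(W, x):
    # Internal representation: signs of the K hidden-unit fields
    return np.sign(W @ x)

def committee_output(W, x):
    # Committee machine: majority vote over the hidden units (K odd)
    return np.sign(hidden_reps(W, x).sum())

def parity_output(W, x):
    # Parity machine: product of the hidden-unit signs
    return np.prod(hidden_reps(W, x))

rng = np.random.default_rng(0)
K, N = 3, 12                         # K hidden units, N binary inputs
W = rng.standard_normal((K, N))      # continuous couplings
x = rng.choice([-1.0, 1.0], size=N)  # one training pattern
sigma = hidden_reps(W, x)            # internal representation in {-1, +1}^K
print(sigma, committee_output(W, x), parity_output(W, x))
```

In the paper's framework, the weight space decomposes into volumes of couplings that realize each possible assignment of such internal representations to the training patterns; the sketch only shows how a single representation is computed.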
| Original language | English |
|---|---|
| Pages (from-to) | 2432-2435 |
| Number of pages | 4 |
| Journal | Physical Review Letters |
| Volume | 75 |
| Issue number | 12 |
| DOIs | |
| Publication status | Published - 1 Jan 1995 |
| Externally published | Yes |
Title: Weight space structure and internal representations: A direct approach to learning and generalization in multilayer neural networks