TY - JOUR
T1 - Capacity-Resolution Trade-Off in the Optimal Learning of Multiple Low-Dimensional Manifolds by Attractor Neural Networks
AU - Battista, Aldo
AU - Monasson, Rémi
N1 - Publisher Copyright:
© 2020 American Physical Society.
PY - 2020/1/29
Y1 - 2020/1/29
N2 - Recurrent neural networks (RNN) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N^2 pairwise interactions in a RNN with N neurons to embed L manifolds of dimension D≪N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log ϵ|^{-D}, where ϵ is the error on the position encoded by the neural activity along each manifold. Hence, RNN are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
AB - Recurrent neural networks (RNN) are powerful tools to explain how attractors may emerge from noisy, high-dimensional dynamics. We study here how to learn the ∼N^2 pairwise interactions in a RNN with N neurons to embed L manifolds of dimension D≪N. We show that the capacity, i.e., the maximal ratio L/N, decreases as |log ϵ|^{-D}, where ϵ is the error on the position encoded by the neural activity along each manifold. Hence, RNN are flexible memory devices capable of storing a large number of manifolds at high spatial resolution. Our results rely on a combination of analytical tools from statistical mechanics and random matrix theory, extending Gardner's classical theory of learning to the case of patterns with strong spatial correlations.
U2 - 10.1103/PhysRevLett.124.048302
DO - 10.1103/PhysRevLett.124.048302
M3 - Article
C2 - 32058781
AN - SCOPUS:85079540827
SN - 0031-9007
VL - 124
JO - Physical Review Letters
JF - Physical Review Letters
IS - 4
M1 - 048302
ER -