Error bounds for deep ReLU networks using the Kolmogorov–Arnold superposition theorem

Hadrien Montanelli, Haizhao Yang

Research output: Contribution to journal › Article › peer-review

Abstract

We prove a theorem concerning the approximation of multivariate functions by deep ReLU networks, for which the curse of dimensionality is lessened. Our theorem is based on a constructive proof of the Kolmogorov–Arnold superposition theorem and on a subset of multivariate continuous functions whose outer superposition functions can be efficiently approximated by deep ReLU networks.
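For context, a standard statement of the Kolmogorov–Arnold superposition theorem (notation chosen here for illustration; the paper's own formulation may differ) is that every continuous function $f : [0,1]^d \to \mathbb{R}$ can be written as

$$f(x_1, \dots, x_d) = \sum_{q=0}^{2d} \Phi_q\!\left( \sum_{p=1}^{d} \phi_{q,p}(x_p) \right),$$

where the inner functions $\phi_{q,p}$ are continuous and independent of $f$. The outer functions $\Phi_q$ are the "outer superposition functions" mentioned in the abstract, and it is these one-dimensional functions that are approximated by deep ReLU networks.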

Original language: English
Pages (from-to): 1-6
Number of pages: 6
Journal: Neural Networks
Volume: 129
DOIs
Publication status: Published - 1 Sept 2020
Externally published: Yes

Keywords

  • Approximation theory
  • Curse of dimensionality
  • Deep ReLU networks
  • Kolmogorov–Arnold superposition theorem
