Bounds on the Approximation Power of Feedforward Neural Networks

Research output: Contribution to journal › Conference article › peer-review

Abstract

The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and the network's depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference between the outputs of two neural networks that share identical weights but use different activation functions.
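The second result can be made concrete with a small numerical experiment. The sketch below (not taken from the paper) builds two feedforward networks that share the same randomly chosen weights but use different piecewise linear activations, here ReLU and a leaky ReLU with slope 0.1 as an illustrative pair, and measures the largest output gap over sample inputs; the paper's upper bound controls this kind of quantity in terms of how much the activations differ.

```python
# Minimal illustrative sketch: two networks with identical weights but
# different piecewise linear activations (ReLU vs. leaky ReLU, chosen here
# for illustration), and the empirical gap between their outputs.
import numpy as np

rng = np.random.default_rng(0)

# Randomly chosen weights for a small depth-3 network; both networks below
# share these weights and biases exactly.
widths = [2, 16, 16, 1]
weights = [rng.standard_normal((m, n)) for m, n in zip(widths[1:], widths[:-1])]
biases = [rng.standard_normal(m) for m in widths[1:]]

def forward(x, activation):
    """Evaluate the feedforward network on input x with the given activation."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = activation(W @ h + b)
    return weights[-1] @ h + biases[-1]  # linear output layer

relu = lambda z: np.maximum(z, 0.0)
leaky_relu = lambda z: np.where(z > 0, z, 0.1 * z)  # differs from ReLU only for z < 0

# Empirical maximum output gap over random inputs in [-1, 1]^2.
xs = rng.uniform(-1.0, 1.0, size=(1000, widths[0]))
gap = max(abs(forward(x, relu)[0] - forward(x, leaky_relu)[0]) for x in xs)
print(f"max |f_relu(x) - f_leaky(x)| over samples: {gap:.4f}")
```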

Original language: English
Pages (from-to): 3453-3461
Number of pages: 9
Journal: Proceedings of Machine Learning Research
Volume: 80
Publication status: Published - 1 Jan 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018
