Bounds on the approximation power of feed forward neural networks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

The approximation power of general feedforward neural networks with piecewise linear activation functions is investigated. First, lower bounds on the size of a network are established in terms of the approximation error and the network's depth and width. These bounds improve upon state-of-the-art bounds for certain classes of functions, such as strongly convex functions. Second, an upper bound is established on the difference between two neural networks with identical weights but different activation functions.
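The following is a minimal numerical sketch (not taken from the paper) of the kind of size/error trade-off the abstract refers to for strongly convex targets. It assumes the target f(x) = x^2 on [0, 1], equispaced breakpoints, and the standard fact that a one-hidden-layer ReLU network with w units is piecewise linear with at most w + 1 pieces; the exact constants and the paper's own construction may differ.

```python
import numpy as np

def pw_linear_interp_error(p, n_grid=100_001):
    """Sup-norm error of the piecewise-linear interpolant of f(x) = x^2
    on [0, 1] built from p equal pieces (p + 1 knots)."""
    knots = np.linspace(0.0, 1.0, p + 1)
    xs = np.linspace(0.0, 1.0, n_grid)
    interp = np.interp(xs, knots, knots**2)   # chords through the knots
    return np.max(np.abs(interp - xs**2))

for p in [1, 2, 4, 8, 16, 32]:
    err = pw_linear_interp_error(p)
    print(f"pieces p={p:3d}  sup error={err:.6f}  p^2 * error={p**2 * err:.4f}")

# p^2 * error stays near 0.25, i.e. the error scales like 1/(4 p^2):
# a piecewise-linear approximant of this strongly convex target needs on
# the order of eps^(-1/2) pieces to reach sup error eps.  Since a
# one-hidden-layer ReLU network with w units realizes at most w + 1
# pieces, its width must grow at least like eps^(-1/2).  This is only an
# illustration of the counting argument behind such lower bounds, not
# the paper's statement.
```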

Original language: English
Title of host publication: 35th International Conference on Machine Learning, ICML 2018
Editors: Jennifer Dy, Andreas Krause
Publisher: International Machine Learning Society (IMLS)
Pages: 5531-5539
Number of pages: 9
ISBN (Electronic): 9781510867963
Publication status: Published - 1 Jan 2018
Event: 35th International Conference on Machine Learning, ICML 2018 - Stockholm, Sweden
Duration: 10 Jul 2018 - 15 Jul 2018

Publication series

Name: 35th International Conference on Machine Learning, ICML 2018
Volume: 8

Conference

Conference: 35th International Conference on Machine Learning, ICML 2018
Country/Territory: Sweden
City: Stockholm
Period: 10/07/18 - 15/07/18
