Abstract
Finding a dataset of minimal cardinality that characterizes the optimal parameters of a model is of paramount importance in machine learning and in distributed optimization over a network. This paper investigates the compressibility of large datasets. More specifically, we propose a framework that jointly learns the input-output mapping and the most representative samples of the dataset (the sufficient dataset). Our analytical results show that the cardinality of the sufficient dataset increases sub-linearly with respect to the original dataset size. Numerical evaluations on real datasets reveal substantial compressibility, up to 95%, without a noticeable drop in learning performance, as measured by the generalization error.
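The sketch below is only an illustration of the general idea of training on a small representative subset instead of the full dataset; it uses a simple k-means coreset heuristic and scikit-learn, and is not the joint-learning framework proposed in the paper.

```python
# Illustrative sketch only: a k-means coreset-style heuristic that keeps a
# small "sufficient" subset of the training data, then compares test accuracy
# against a model trained on the full dataset. NOT the paper's algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Synthetic stand-in for a "large" dataset.
X, y = make_classification(n_samples=20_000, n_features=20,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Reference model trained on the full training set.
full_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
full_acc = full_model.score(X_test, y_test)

# Heuristic "sufficient dataset": keep ~5% of the samples by picking, per
# class, the training point closest to each k-means centroid, so the subset
# still covers the input distribution.
keep_fraction = 0.05
subset_idx = []
for label in np.unique(y_train):
    cls_idx = np.where(y_train == label)[0]
    k = max(1, int(keep_fraction * len(cls_idx)))
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X_train[cls_idx])
    # Distances of every class sample to every centroid: shape (n_cls, k).
    dists = km.transform(X_train[cls_idx])
    # For each centroid, take the nearest real sample as its representative.
    subset_idx.extend(cls_idx[np.argmin(dists, axis=0)])
subset_idx = np.array(subset_idx)

small_model = LogisticRegression(max_iter=1000).fit(
    X_train[subset_idx], y_train[subset_idx])
small_acc = small_model.score(X_test, y_test)

print(f"full data : {len(y_train)} samples, test acc = {full_acc:.3f}")
print(f"subset    : {len(subset_idx)} samples "
      f"({100 * len(subset_idx) / len(y_train):.1f}%), "
      f"test acc = {small_acc:.3f}")
```

On simple synthetic data such as this, the 5% subset typically matches the full-data accuracy closely, which mirrors the kind of compressibility-versus-generalization trade-off the abstract describes, though the paper's reported 95% compression figure refers to its own framework and datasets, not this heuristic.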
| Original language | English |
|---|---|
| Pages (from-to) | 2191-2200 |
| Number of pages | 10 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 97 |
| Publication status | Published - 1 Jan 2019 |
| Event | 36th International Conference on Machine Learning, ICML 2019 - Long Beach, United States |
| Duration | 9 Jun 2019 → 15 Jun 2019 |