Mini-batch stochastic approaches for accelerated multiplicative updates in nonnegative matrix factorisation with beta-divergence

Romain Serizel, Slim Essid, Gael Richard

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Nonnegative matrix factorisation (NMF) with β-divergence is a popular method for decomposing real-world data. In this paper we propose mini-batch stochastic algorithms to perform NMF efficiently on large data matrices. Besides their stochastic aspect, the mini-batch approach makes it possible to exploit intensive computing devices, such as general-purpose graphics processing units, to decrease the processing time and, in some cases, to outperform coordinate-descent approaches.
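As a rough illustration of the kind of algorithm the abstract describes, the sketch below applies the classical multiplicative update rules for β-divergence NMF on mini-batches of columns, updating W from each batch's statistics. This is a minimal assumed variant, not the paper's exact update scheme; all function and parameter names are illustrative.

```python
import numpy as np

def minibatch_nmf_beta(V, rank, beta=1.0, n_epochs=10, batch_size=256,
                       eps=1e-12, seed=0):
    """Sketch: mini-batch multiplicative updates for beta-divergence NMF.

    Factorises the nonnegative matrix V (F x N) as W @ H. Each mini-batch
    of columns updates its slice of H with the standard multiplicative
    rule, then W is updated from that batch alone (the stochastic step).
    Averaging/step-size schedules from the paper are not reproduced here.
    """
    rng = np.random.default_rng(seed)
    F, N = V.shape
    W = rng.random((F, rank)) + eps
    H = rng.random((rank, N)) + eps
    for _ in range(n_epochs):
        order = rng.permutation(N)
        for start in range(0, N, batch_size):
            idx = order[start:start + batch_size]
            Vb, Hb = V[:, idx], H[:, idx]
            # H update on the batch:
            # H <- H * (W^T[(WH)^(beta-2) * V]) / (W^T (WH)^(beta-1))
            WH = W @ Hb + eps
            Hb *= (W.T @ (WH ** (beta - 2) * Vb)) / (W.T @ WH ** (beta - 1) + eps)
            H[:, idx] = Hb
            # W update using only this mini-batch's statistics
            WH = W @ Hb + eps
            W *= ((WH ** (beta - 2) * Vb) @ Hb.T) / (WH ** (beta - 1) @ Hb.T + eps)
    return W, H
```

With β = 2 this reduces to Euclidean-distance NMF; β = 1 gives Kullback-Leibler and β = 0 Itakura-Saito, the cases most common in audio applications. The element-wise power and matrix products map directly onto GPGPU kernels, which is the acceleration the abstract refers to.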

Original language: English
Title of host publication: 2016 IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2016 - Proceedings
Editors: Kostas Diamantaras, Aurelio Uncini, Francesco A. N. Palmieri, Jan Larsen
Publisher: IEEE Computer Society
ISBN (Electronic): 9781509007462
DOIs
Publication status: Published - 8 Nov 2016
Externally published: Yes
Event: 26th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2016 - Vietri sul Mare, Salerno, Italy
Duration: 13 Sept 2016 - 16 Sept 2016

Publication series

Name: IEEE International Workshop on Machine Learning for Signal Processing, MLSP
Volume: 2016-November
ISSN (Print): 2161-0363
ISSN (Electronic): 2161-0371

Conference

Conference: 26th IEEE International Workshop on Machine Learning for Signal Processing, MLSP 2016
Country/Territory: Italy
City: Vietri sul Mare, Salerno
Period: 13/09/16 - 16/09/16

Keywords

  • GPGPU
  • Nonnegative matrix factorisation
  • multiplicative rules
  • online learning

