Learning with minibatch Wasserstein: asymptotic and gradient properties

  • Kilian Fatras
  • Younes Zine
  • Rémi Flamary
  • Rémi Gribonval
  • Nicolas Courty

Research output: Contribution to journal › Conference article › peer-review

Abstract

Optimal transport distances are powerful tools to compare probability distributions and have found many applications in machine learning. Yet their algorithmic complexity prevents their direct use on large-scale datasets. To overcome this challenge, practitioners compute these distances on minibatches, i.e., they average the outcomes of several smaller optimal transport problems. In this paper we propose an analysis of this practice, whose effects are so far not well understood. We notably argue that it is equivalent to an implicit regularization of the original problem, with appealing properties such as unbiased estimators and gradients and a concentration bound around the expectation, but also with defects such as the loss of the distance property. Along with this theoretical analysis, we conduct empirical experiments on gradient flows, GANs, and color transfer that highlight the practical interest of this strategy.
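A minimal sketch of the minibatch strategy described in the abstract, written with the POT library (`ot`): draw small subsamples from each empirical distribution, solve the exact (cheap) optimal transport problem on each pair, and average the losses. The function name `minibatch_wasserstein` and the parameter defaults are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def minibatch_wasserstein(x, y, batch_size=64, n_batches=10, seed=None):
    """Average exact OT losses over random minibatch pairs (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    losses = []
    for _ in range(n_batches):
        # Subsample a minibatch from each distribution without replacement
        xb = x[rng.choice(len(x), batch_size, replace=False)]
        yb = y[rng.choice(len(y), batch_size, replace=False)]
        M = ot.dist(xb, yb)           # pairwise squared Euclidean cost matrix
        a = b = ot.unif(batch_size)   # uniform weights on each minibatch
        losses.append(ot.emd2(a, b, M))  # exact OT loss on the small problem
    return float(np.mean(losses))
```

Each inner problem is only `batch_size × batch_size`, so the cubic cost of exact OT stays manageable; the paper's analysis concerns the bias, gradients, and concentration of exactly this kind of averaged estimator.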

Original language: English
Pages (from-to): 2131-2141
Number of pages: 11
Journal: Proceedings of Machine Learning Research
Volume: 108
Publication status: Published - 1 Jan 2020
Externally published: Yes
Event: 23rd International Conference on Artificial Intelligence and Statistics, AISTATS 2020 - Virtual, Online
Duration: 26 Aug 2020 - 28 Aug 2020
