Learning with entropy-regularized optimal transport

[Moved Online] Hot Topics: Optimal transport and applications to machine learning and statistics May 04, 2020 - May 08, 2020

May 08, 2020 (09:30 AM PDT - 10:30 AM PDT)
Speaker(s): Aude Genevay (Massachusetts Institute of Technology)
Location: SLMath: Online/Virtual
Tags/Keywords
  • optimal transport
  • machine learning

Primary Mathematics Subject Classification No Primary AMS MSC
Secondary Mathematics Subject Classification No Secondary AMS MSC
Video
Abstract

Entropy-regularized OT (EOT) was first introduced by Cuturi in 2013 to alleviate the computational burden of OT in machine learning problems. In this talk, after studying the properties of EOT, we introduce a new family of losses between probability measures called Sinkhorn divergences. Built on EOT, this family interpolates between OT (no regularization) and MMD (infinite regularization). We illustrate these theoretical claims on a set of learning problems formulated as minimizations over the space of measures.
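The regularized cost behind these losses can be computed with the Sinkhorn fixed-point iterations, and the divergence is obtained by subtracting the two self-transport terms. The sketch below is a minimal NumPy illustration under assumed choices (squared-Euclidean cost, uniform weights in the example); the function names are ours, not from the talk, and no numerical stabilization (log-domain updates) is included.

```python
import numpy as np

def sinkhorn_cost(x, y, a, b, eps=1.0, n_iters=200):
    """Entropy-regularized OT cost between weighted point clouds.

    x, y: (n, d) and (m, d) support points; a, b: probability weights.
    eps: regularization strength (larger = closer to an MMD-like loss).
    """
    # Squared-Euclidean ground cost and the associated Gibbs kernel.
    C = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    K = np.exp(-C / eps)
    # Sinkhorn iterations: alternate scaling to match the two marginals.
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # (approximately) optimal coupling
    return np.sum(P * C)

def sinkhorn_divergence(x, y, a, b, eps=1.0):
    """Debiased loss: S(a,b) = OT_eps(a,b) - (OT_eps(a,a) + OT_eps(b,b)) / 2."""
    return (sinkhorn_cost(x, y, a, b, eps)
            - 0.5 * sinkhorn_cost(x, x, a, a, eps)
            - 0.5 * sinkhorn_cost(y, y, b, b, eps))
```

By construction the divergence vanishes when the two measures coincide, which removes the entropic bias of the raw regularized cost and makes the loss suitable for the minimization-over-measures problems mentioned in the abstract.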

Supplements No Notes/Supplements Uploaded
Video/Audio Files

H.264 Video 928_28387_8341_Learning_with_Entropy-Regularized_Optimal_Transport.mp4