
Regularity theory and uniform convergence in the large data limit of graph Laplacian eigenvectors on random data clouds

[Moved Online] Hot Topics: Optimal transport and applications to machine learning and statistics, May 04, 2020 - May 08, 2020

May 07, 2020 (09:30 AM PDT - 10:30 AM PDT)
Speaker(s): Nicolas Garcia Trillos (University of Wisconsin-Madison)
Location: SLMath: Online/Virtual
Tags/Keywords
  • Graph Laplacian

  • Laplace-Beltrami operator

  • manifold learning

  • random geometric graph


Abstract

Graph Laplacians are omnipresent objects in machine learning that have been used in supervised, unsupervised, and semi-supervised settings due to their versatility in extracting local and global geometric information from data clouds. In this talk I will present an overview of how the mathematical theory built around them has grown deeper, layer by layer, from the first results on pointwise consistency in the 2000s to the most recent developments; this line of research has found strong connections between PDEs built on proximity graphs over data clouds and PDEs on manifolds, and has given a more precise mathematical meaning to the task of “manifold learning”. In the first part of the talk I will highlight how ideas from optimal transport made possible some of the initial steps, which provided L^2-type error estimates between the spectra of graph Laplacians and those of Laplace-Beltrami operators. In the second part of the talk, which is based on recent work with Jeff Calder and Marta Lewicka, I will present a newly developed regularity theory for graph Laplacians which, among other things, allows us to bootstrap the L^2 error estimates obtained through optimal transport and upgrade them to uniform convergence with almost C^{0,1} convergence rates. The talk can be seen as a tale of how a flow of ideas from optimal transport, PDEs, and analysis in general has enabled a finer understanding of concrete objects that are popular in data analysis and machine learning.
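To make the objects in the abstract concrete, here is a minimal numerical sketch, not part of the talk materials: the unit-circle example, the sample size, and the connectivity radius are illustrative assumptions. It builds an epsilon-proximity graph Laplacian on a random data cloud sampled from the unit circle and compares its low-lying spectrum with that of the Laplace-Beltrami operator on S^1, whose eigenvalues are k^2 with multiplicity 2. Ratios of eigenvalues are compared so that the convention-dependent normalizing constant of the graph Laplacian drops out.

```python
# A minimal sketch (illustrative assumptions throughout): sample a data cloud
# from the unit circle, build an epsilon-graph Laplacian, and compare its
# low-lying spectrum with the Laplace-Beltrami spectrum of S^1
# (eigenvalues k^2, each with multiplicity 2). Eigenvalue *ratios* are
# compared, so the normalizing constant of the graph Laplacian cancels.
import numpy as np

rng = np.random.default_rng(0)
n = 2000    # sample size (illustrative choice)
eps = 0.15  # connectivity radius; must shrink slowly as n grows

# Data cloud: n points sampled uniformly from the unit circle in R^2.
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Indicator-kernel weights on the epsilon-proximity graph.
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W = (D <= eps).astype(float)
np.fill_diagonal(W, 0.0)

# Unnormalized graph Laplacian L = diag(degrees) - W (symmetric, PSD).
L = np.diag(W.sum(axis=1)) - W

# Seven smallest eigenvalues, in ascending order.
evals = np.linalg.eigvalsh(L)[:7]

# Ratios to the first nonzero eigenvalue; compare with 0, 1, 1, 4, 4, 9, 9.
print("graph Laplacian ratios: ", np.round(evals / evals[1], 3))
print("Laplace-Beltrami ratios:", np.array([0., 1., 1., 4., 4., 9., 9.]))
```

With these assumed parameters the printed ratios should be close to 0, 1, 1, 4, 4, 9, 9. Statements about how the corresponding eigenvectors converge, not just in L^2 but uniformly over the data cloud, are exactly what the regularity theory described in the abstract makes precise.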

Supplements
Notes (PDF, 3.49 MB)
Video/Audio Files


H.264 Video: 928_28386_8331_Regularity_Theory_and_Uniform_Convergence_in_the_Large_Data_Limit_of_Graph_Laplacian_Eigenvectors_on_Random_Data_Clouds.mp4