
Seminar

Graduate Students Seminar: "Training Shallow ReLU Networks on Noisy Data Using Hinge Loss: When Do We Overfit and is it Benign?"
October 31, 2023, 02:00 PM - 03:00 PM (PDT)
Location: SLMath: Online/Virtual, Baker Board Room
Speaker(s): Erin George (University of California, Los Angeles)
Video

Training Shallow ReLU Networks on Noisy Data Using Hinge Loss: When Do We Overfit and is it Benign?

Abstract/Media

In this talk, I present a study of benign overfitting in two-layer ReLU networks trained using gradient descent and hinge loss on noisy data for binary classification, where data labels may be flipped or "corrupted". Three conditions on the margin of the clean data are identified, and they give rise to three distinct training outcomes: benign overfitting, in which zero loss is achieved and, with high probability, test data is classified correctly; overfitting, in which zero loss is achieved but test data is misclassified with probability bounded below by a constant; and non-overfitting, in which clean points, but not corrupt points, achieve zero loss and, again with high probability, test data is classified correctly.
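To make the setup concrete, the following is a minimal sketch of the kind of training procedure the abstract describes: a shallow (two-layer) ReLU network trained with full-batch gradient descent on hinge loss, with a fraction of labels flipped. This is not the speaker's experiment; the data distribution, network width, learning rate, and noise rate here are all illustrative assumptions.

```python
# Sketch only: a two-layer ReLU network trained with gradient descent on
# hinge loss over data with corrupted labels. All hyperparameters are
# assumptions, not values from the talk.
import torch

torch.manual_seed(0)
n, d, width, noise_rate = 200, 10, 64, 0.1  # illustrative sizes

# Synthetic binary data; a fraction of labels is flipped ("corrupted").
X = torch.randn(n, d)
y_clean = torch.sign(X[:, 0])               # a simple clean labeling rule
flip = torch.rand(n) < noise_rate
y = torch.where(flip, -y_clean, y_clean)

# Two-layer (shallow) ReLU network with scalar output.
W = (0.1 * torch.randn(width, d)).requires_grad_()
a = (0.1 * torch.randn(width)).requires_grad_()

def f(x):
    return torch.relu(x @ W.T) @ a

lr = 0.1
for step in range(5000):
    loss = torch.clamp(1 - y * f(X), min=0).mean()  # hinge loss
    if loss.item() == 0:            # zero loss: every point has margin >= 1
        break
    loss.backward()
    with torch.no_grad():           # manual full-batch gradient descent step
        W -= lr * W.grad
        a -= lr * a.grad
        W.grad.zero_()
        a.grad.zero_()

# Did training fit only the clean points (non-overfitting), or also the
# corrupted points (overfitting, which may or may not be benign)?
with torch.no_grad():
    margins = y * f(X)
    print("clean points fit: ", bool((margins[~flip] >= 1).all()))
    print("corrupt points fit:", bool((margins[flip] >= 1).all()))
```

The final check mirrors the trichotomy in the abstract: whether the corrupted points, in addition to the clean ones, reach zero hinge loss is what separates the overfitting regimes from the non-overfitting one; whether test accuracy survives is what separates benign from non-benign overfitting.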

Slides (PDF, 730 KB)
