
An Automatic Finite-Sample Robustness Metric: Can Dropping a Little Data Change Conclusions?

[Virtual] Hot Topics: Foundations of Stable, Generalizable and Transferable Statistical Learning March 07, 2022 - March 10, 2022

March 08, 2022 (10:00 AM PST - 10:25 AM PST)
Speaker(s): Tamara Broderick (Massachusetts Institute of Technology)
Location: SLMath: Online/Virtual
Tags/Keywords
  • robustness
  • influence
  • local robustness
  • z-estimators
  • sensitivity

Abstract

One hopes that data analyses will be used to make beneficial decisions regarding people's health, finances, and well-being. But the data fed to an analysis may systematically differ from the data on which those decisions are ultimately applied. For instance, suppose we analyze data in one country and conclude that microcredit is effective at alleviating poverty; based on this analysis, we decide to distribute microcredit in other locations and in future years. We might then ask: can we trust our conclusion to apply under new conditions? If we found that a very small percentage of the original data was instrumental in determining the original conclusion, we might expect the conclusion to be unstable under new conditions. We therefore propose a method to assess the sensitivity of data analyses to the removal of a very small fraction of the data set. Analyzing all possible data subsets of a certain size is computationally prohibitive, so we provide an approximation. We call the resulting method the Approximate Maximum Influence Perturbation. Our approximation is automatically computable, theoretically supported, and works for common estimators --- including (but not limited to) OLS, IV, GMM, MLE, MAP, and variational Bayes. We show that any non-robustness our metric finds is conclusive. Empirically, we demonstrate that while some applications are robust, in others the sign of a treatment effect can be changed by dropping less than 0.1% of the data --- even in simple models and even when standard errors are small.
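To make the idea concrete, the sketch below illustrates the influence-function reasoning described in the abstract for the special case of OLS on synthetic data. This is an assumption-laden illustration, not the authors' implementation: the data-generating process, the 0.1% drop fraction, and the greedy selection rule are all choices made here for demonstration. The first-order effect of dropping observation i on the slope is approximated by the empirical influence function, and the points predicted to move the slope down the most are removed before refitting.

```python
# A minimal sketch of influence-based data dropping for OLS, on synthetic
# data of our own construction; not the authors' AMIP code.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
# Small true positive effect plus heavy-tailed noise (an assumed setup).
y = 0.02 * x + rng.standard_t(df=3, size=n)

X = np.column_stack([np.ones(n), x])
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ (X.T @ y)
resid = y - X @ beta

# First-order (influence-function) effect of dropping observation i on the
# slope estimate: the change is approximately -((X^T X)^{-1} x_i)[1] * e_i.
infl = (X @ XtX_inv)[:, 1] * resid  # dropping i shifts the slope by about -infl[i]

# Greedily drop the alpha-fraction of points whose removal is predicted
# to decrease the slope the most (largest positive influence scores).
alpha = 0.001  # 0.1% of the data
k = max(1, int(np.floor(alpha * n)))
drop = np.argsort(-infl)[:k]
predicted_change = -infl[drop].sum()  # linear (AMIP-style) prediction

keep = np.setdiff1d(np.arange(n), drop)
beta_refit = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
print("slope before:", beta[1], "after:", beta_refit[1],
      "predicted change:", predicted_change)
```

Scanning all influence scores once and summing the top few replaces an intractable search over all size-k subsets; whether the refit slope actually changes sign depends on the data, which is exactly what the metric is designed to check.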
