Aug 28, 2023 (Monday)

09:00 AM - 09:15 AM
Welcome
Location: SLMath: Eisenbud Auditorium, Online/Virtual

09:30 AM - 10:30 AM
Measuring Our Chances: Risk Prediction in this World and its Betters
Cynthia Dwork (Harvard University)
Location: SLMath: Online/Virtual

10:30 AM - 11:00 AM
Break
Location: SLMath: Atrium

11:00 AM - 12:00 PM
Pretrial Risk Assessment on the Ground: Lessons from New Mexico
Cristopher Moore (Santa Fe Institute)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
Using data on 15,000 felony defendants who were released pretrial over a four-year period in Albuquerque, my collaborators and I audited a popular risk assessment algorithm, the Public Safety Assessment (PSA), for accuracy and fairness. But what happened afterward is even more interesting. Using the same data, we audited proposed legislation which would automatically detain large classes of defendants. By treating these laws as algorithms, and subjecting them to the same kind of scrutiny, we found that they are predictively inaccurate and would detain many people unnecessarily.
We then looked more closely at the data. Almost all studies of pretrial rearrest lump multiple types and severities of crimes together. By digging deeper, we found that rearrest for high-level felonies is very rare — about 0.1% and 1% for 1st and 2nd degree felonies, respectively. Most rearrests are for 4th degree felonies, and about 1/3 are for misdemeanors or petty misdemeanors. We also found that most people with a "failure to appear" miss only one of their hearings, suggesting that they are candidates for supportive interventions rather than for detention. This is a good example of a domain where what we need is not better algorithms, but better data — and we need humans to understand what algorithms actually mean in terms of probabilities, rather than abstract scores like "6" or "orange."
Finally, I'll discuss how the debate around pretrial detention is playing out in practice. Unlike the 2016 ProPublica article, it's not about algorithms jailing people unfairly: it's the reverse, with prosecutors and politicians arguing that the PSA underestimates the risk of many dangerous defendants, and that they should be detained rather than released.
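
A minimal sketch of the score-versus-probability point above, on entirely made-up data (not the PSA or the New Mexico dataset): bin past cases by an abstract risk score and report the observed rearrest rate in each bin, so that a score such as "6" can be read as an empirical probability.

```python
# Toy sketch (made-up data): turning an abstract risk score into an
# empirical probability by binning past cases by score and computing
# the observed rearrest rate in each bin.
import numpy as np

rng = np.random.default_rng(0)
n = 15_000
score = rng.integers(1, 7, n)            # hypothetical 1-6 risk score
rearrest = rng.random(n) < 0.02 * score  # hypothetical observed outcomes

for s in range(1, 7):
    mask = score == s
    print(f"score {s}: observed rearrest rate {rearrest[mask].mean():.1%} "
          f"({mask.sum()} cases)")
```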

12:00 PM - 01:30 PM
Lunch

01:30 PM - 02:30 PM
Included-Variable Bias and Everything but the Kitchen Sink
Sharad Goel (Harvard University)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
When estimating the risk of an adverse outcome, common statistical guidance is to include all available factors to maximize predictive performance. Similarly, in observational studies of discrimination, general practice is to adjust for all potential confounds to isolate any impermissible effect of legally protected traits, like race or gender, on decisions. I’ll argue that this popular “kitchen-sink” approach can in fact worsen predictions in the first case and yield misleading estimates of discrimination in the second. I’ll connect these results to ongoing debates in algorithmic fairness, criminal justice, healthcare, and college admissions.
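
A toy construction (not the talk's analysis) of one way a kitchen-sink adjustment can mislead: if a recorded factor is itself shaped by the protected attribute, adjusting for it can hide a real disparity in decisions. All variable names and effect sizes below are made up.

```python
# Toy simulation: the recorded "severity" is biased against group 1, and
# decisions follow the record; adjusting for the record masks the gap.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, n)                  # hypothetical protected attribute
conduct = rng.normal(size=n)                   # unobserved true behavior
severity = conduct + 0.5 * group + rng.normal(scale=0.2, size=n)  # biased record
decision = severity + rng.normal(scale=0.2, size=n)  # decision driven by record

def ols(X, y):
    """Ordinary least squares with an intercept; returns slope coefficients."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

print("group gap, unadjusted:           ", ols(group[:, None], decision)[0])
print("group gap, adjusting for record: ",
      ols(np.column_stack([group, severity]), decision)[0])
```

Here the unadjusted estimate recovers the real disparity (about 0.5), while the "adjusted" estimate is driven toward zero because the adjustment variable already absorbs the effect of group membership.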

02:30 PM - 03:00 PM
Afternoon Tea
Location: SLMath: Atrium

Aug 29, 2023 (Tuesday)

09:30 AM - 10:30 AM
The Complex Systems View of AI Ethics
Tina Eliassi-Rad (Northeastern University)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
In this talk, I will argue that we should study AI ethics from the perspective of complex systems. In particular, ML systems are not islands. To understand and mitigate the risks and harms associated with ML systems, we need to examine the broader complex systems in which ML systems operate. By broader complex systems, I mean our social, economic, and political systems [1]. Thus, we must remove our optimization blinders; that is, we should not focus only on maximizing some notion of constrained expected utility. I will provide examples from the impact of COVID-19 interventions on amplifying racial disparities in the U.S. criminal justice system [2], the impact of misinformation on democracy [3][4], the complexities of interventions for information access equality [5], and, time permitting, the use of algorithms for school admissions [6][7]. All references are available at http://eliassi.org.

10:30 AM - 11:00 AM
Break
Location: SLMath: Atrium

11:00 AM - 12:00 PM
New Challenges in Optimization for Ethical Decisions
Swati Gupta (Massachusetts Institute of Technology)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

12:00 PM - 01:30 PM
Lunch

01:30 PM - 02:30 PM
Thinking Critically About Fair Clustering: Past, Present, and Future
Brian Brubach (Wellesley College)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
Fair clustering encompasses a diverse group of fundamental optimization problems spanning many subdomains from unsupervised learning in machine learning to facility location in operations research. This talk will provide a broad overview of common problems and algorithmic techniques in the fair clustering literature with a particular focus on k-clustering objectives (e.g., k-center, k-means). We will then discuss challenges and opportunities for growth in this nascent research area.
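
As one concrete example of the kind of group-fairness measure studied in this literature (a sketch, not drawn from the talk), the snippet below computes the "balance" of a clustering for a binary protected group, using made-up cluster assignments; a balance of 1 means every cluster contains the two groups in equal numbers.

```python
# Minimal sketch: "balance" of a clustering, a commonly studied
# group-fairness notion for k-clustering. All data here is made up.
import numpy as np

rng = np.random.default_rng(1)
n, k = 300, 3
cluster = rng.integers(0, k, n)   # hypothetical k-means/k-center assignment
group = rng.integers(0, 2, n)     # hypothetical binary protected group

def balance(cluster_ids, groups):
    """min over clusters of min(#group0/#group1, #group1/#group0)."""
    vals = []
    for c in np.unique(cluster_ids):
        g = groups[cluster_ids == c]
        n0, n1 = np.sum(g == 0), np.sum(g == 1)
        if n0 == 0 or n1 == 0:
            return 0.0            # a single-group cluster has zero balance
        vals.append(min(n0 / n1, n1 / n0))
    return float(min(vals))

print(f"balance of this clustering: {balance(cluster, group):.2f}")
```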

02:30 PM - 03:00 PM
Afternoon Tea
Location: SLMath: Atrium

03:00 PM - 04:00 PM
Fair Clustering and Polling Places
Kristian Lum (University of Chicago)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

04:00 PM - 06:20 PM
Reception
Location: SLMath: Front Courtyard

Aug 30, 2023 (Wednesday)

09:30 AM - 10:30 AM
Tradeoffs in Machine Learning
Yaim Cooper (University of Notre Dame)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
In this talk, I'll discuss three classical and influential tradeoffs in machine learning: the bias-variance tradeoff, the accuracy-interpretability tradeoff, and tradeoffs between different definitions of fairness. No prior background is assumed; I will describe each tradeoff, highlight work from the past decade on each, and invite consideration of the role of these tradeoffs in our work.
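
For reference (not part of the abstract), the first of these tradeoffs is commonly summarized by the decomposition of expected squared error, where $y = f(x) + \varepsilon$ with noise variance $\sigma^2$ and $\hat{f}_D$ is a model fit on a random training set $D$:

```latex
\mathbb{E}_{D,\varepsilon}\!\left[\big(y - \hat{f}_D(x)\big)^2\right]
  = \underbrace{\Big(\mathbb{E}_D[\hat{f}_D(x)] - f(x)\Big)^{2}}_{\text{bias}^2}
  + \underbrace{\mathbb{E}_D\!\left[\big(\hat{f}_D(x) - \mathbb{E}_D[\hat{f}_D(x)]\big)^{2}\right]}_{\text{variance}}
  + \underbrace{\sigma^{2}}_{\text{irreducible noise}}
```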

10:30 AM - 11:00 AM
Break
Location: SLMath: Atrium

11:00 AM - 12:00 PM
Epistemic Uncertainty, the AI Problem Understanding Gap and the Necessity of Structured Societal Context Knowledge for Safe, Robust AI
Donald Martin (Google, Inc.)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
The application of machine learning (ML) and artificial intelligence (AI) in high-stakes domains, such as healthcare, presents both opportunities and risks. One significant risk is the epistemic uncertainty of ML/AI developers, who often lack sufficient contextual knowledge about the complex problems they aim to address and the socio-technical environments in which their interventions will be implemented. Conversely, the individuals in civil society who are most affected by these issues, and most vulnerable to the harms that AI systems can cause, possess deep, qualitative contextual knowledge that is often overlooked and difficult to incorporate into product development workflows.
In this talk, Donald will describe the problem understanding gap between civil society and AI product developers, which can lead to harmful outcomes. He will then introduce community-based system dynamics (CBSD) as a way to bridge this gap and provide structural causal knowledge that can inform product development. CBSD involves working closely with communities to understand the dynamics of the problem being addressed, and leveraging this understanding to develop effective and contextually appropriate solutions.

12:00 PM - 01:30 PM
Lunch

01:30 PM - 02:30 PM
Geometry of Deep Learning and Explainable ML
Anders Karlsson (University of Geneva)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
First, I will review the neural networks and deep learning that lie behind the recent rise of AI. The main ideas are rather easy to explain, but the questions of how and why it works so well remain a mystery. This black-box aspect is an important reason for many of the troubles AI is facing and for the risks of this technology. In an attempt to understand deep learning better, I will introduce metrics on neural networks and discuss tools from ergodic theory that then become applicable, based on joint work with Benny Avelin. This concerns random products of transformations, which occur in deep learning in several ways (random initialization, stochastic gradient descent, and the dropout procedure). Thanks to the basic nature of compositions of random maps, the second part of my talk could be of potential interest for some non-ML topics of the program.
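
A toy illustration (not from the talk) of the random-products theme: compose many random linear maps, as in randomly initialized layers, and estimate the exponential growth rate of the product, the kind of quantity that ergodic theory (e.g., via Lyapunov exponents) controls. The dimension, depth, and scaling below are arbitrary choices.

```python
# Toy sketch: composing random linear maps and estimating the top Lyapunov
# exponent of the product via renormalized power iteration.
import numpy as np

rng = np.random.default_rng(0)
d, depth = 16, 2000
v = rng.normal(size=d)
v /= np.linalg.norm(v)

log_growth = 0.0
for _ in range(depth):
    W = rng.normal(scale=1.0 / np.sqrt(d), size=(d, d))  # random "layer"
    v = W @ v
    norm = np.linalg.norm(v)
    log_growth += np.log(norm)   # accumulate log of per-step expansion
    v /= norm                    # renormalize to avoid overflow/underflow

print(f"estimated Lyapunov exponent: {log_growth / depth:.3f}")
```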

02:30 PM - 03:00 PM
Afternoon Tea
Location: SLMath: Atrium

03:00 PM - 04:00 PM
Estimating and Controlling for Fairness via Sensitive Attribute Predictors
Jeremias Sulam (Johns Hopkins University)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
As the use of machine learning models in real-world, high-stakes decision settings continues to grow, it is highly important that we are able to audit and control for any potential fairness violations these models may exhibit towards certain groups. To do so, one naturally requires access to sensitive attributes, such as demographics, gender, or other potentially sensitive features that determine group membership. Unfortunately, in many settings, this information is often unavailable. In this talk, I will present recent work centering on the well-known equalized odds (EOD) definition of fairness. In a setting without sensitive attributes, we first provide tight and computable upper bounds for the EOD violation of a predictor that precisely reflect the worst possible EOD violation. Second, we demonstrate how one can provably control the worst-case EOD via a new post-processing correction method. Our results characterize when directly controlling for EOD with respect to the predicted sensitive attributes is -- and when it is not -- optimal for controlling worst-case EOD. Our results hold under assumptions that are milder than those of previous works, and we illustrate them with experiments on synthetic and real datasets. Time permitting, I will also present recent results on the interpretability of machine learning models, linking common notions of feature importance to well-understood, traditional statistical tests.
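
For readers unfamiliar with the EOD criterion, the sketch below (toy data, not the paper's estimator or bounds) computes the equalized-odds violation of a binary predictor: the largest gap across two groups in true-positive and false-positive rates. The group labels passed in could be the true sensitive attribute or, as in the talk's setting, predicted ones.

```python
# Minimal sketch: equalized-odds (EOD) violation of a binary predictor.
import numpy as np

def rate(y_hat, y, y_value):
    """P(y_hat = 1 | y = y_value) within the given arrays."""
    mask = (y == y_value)
    return y_hat[mask].mean() if mask.any() else 0.0

def eod_violation(y_hat, y, group):
    gaps = []
    for y_value in (0, 1):                      # FPR gap and TPR gap
        r0 = rate(y_hat[group == 0], y[group == 0], y_value)
        r1 = rate(y_hat[group == 1], y[group == 1], y_value)
        gaps.append(abs(r0 - r1))
    return max(gaps)

# Made-up example data: predictions are noisier for group 1.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)
y_hat = (y ^ (rng.random(1000) < 0.1 + 0.1 * group)).astype(int)
print(f"EOD violation: {eod_violation(y_hat, y, group):.3f}")
```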

Aug 31, 2023 (Thursday)

09:30 AM - 10:30 AM
Manipulation-Robust Citizens' Assembly Selection
Bailey Flanigan (Carnegie Mellon University)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Abstract:
Citizens' assemblies—a democratic paradigm where representatives are randomly-chosen citizens—are becoming increasingly mainstream. As these assemblies are used to make higher-stakes decisions, concerns emerge about volunteers manipulating the process of selecting participants. In particular, because selection algorithms must select volunteers based on their self-reported features, a volunteer could misreport their features to increase their chance of being chosen, decrease someone else's chance, and/or increase the expected number of seats given to their own group. While several selection algorithms have been introduced, their manipulability has never been considered.
In this talk, we examine what aspects of the selection process, including the selection algorithm, can be changed to limit such incentives. Strikingly, we show that Leximin, an algorithm that is widely used for its fairness, is highly manipulable. We then introduce a new class of selection algorithms that use $\ell_p$ norms as objective functions. We show that the manipulability of the $\ell_p$-based algorithm decreases as $O(1/n^{1-1/p})$ as the number of volunteers $n$ grows, approaching the optimal rate of $O(1/n)$ as $p \to \infty$. Our theoretical results are confirmed via experiments on eight real-world datasets.
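
A quick numeric illustration of the stated rates (not from the paper, just the asymptotics as written): for each pool size $n$, the quantity $n^{-(1-1/p)}$ moves toward the optimal $1/n$ as $p$ grows.

```python
# Tabulate the bound n^(-(1 - 1/p)) against the optimal rate 1/n
# for a few illustrative values of n and p.
for n in (100, 1_000, 10_000):
    for p in (2, 5, 50):
        bound = n ** -(1 - 1 / p)
        print(f"n={n:>6}, p={p:>2}: n^-(1-1/p) = {bound:.2e}   (1/n = {1 / n:.2e})")
```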

10:30 AM - 10:35 AM
Group Photo
Location: SLMath: Front Courtyard

10:35 AM - 11:00 AM
Break
Location: SLMath: Atrium

11:00 AM - 12:00 PM
Markov Chains and Redistricting
Sarah Cannon (Claremont McKenna College)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

12:00 PM - 01:30 PM
Lunch

01:30 PM - 02:30 PM
Complexity of Cake Cutting
Simina Branzei (Purdue University)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

02:30 PM - 03:00 PM
Afternoon Tea
Location: SLMath: Atrium

03:00 PM - 04:00 PM
Fair Division Using Topological Combinatorics
Francis Su (Harvey Mudd College)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

Sep 01, 2023 (Friday)

09:30 AM - 10:30 AM
Values and Fairness Definitions: Classification, Networks, and Policy
Sorelle Friedler (Haverford College)
Location: SLMath: Online/Virtual

10:30 AM - 11:00 AM
Break
Location: SLMath: Atrium

11:00 AM - 12:00 PM
What We Owe Those For Whom We Build: Legal, Ethical and Practical Considerations for Engineering Responsibility in Machine Learning
Inioluwa Raji (University of California, Berkeley)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

12:00 PM - 01:30 PM
Lunch

01:30 PM - 02:30 PM
Hidden Policy Choices in Modeling
Aaron Horowitz (American Civil Liberties Union)
Location: SLMath: Eisenbud Auditorium, Online/Virtual

02:30 PM - 03:00 PM
Afternoon Tea
Location: SLMath: Atrium

03:00 PM - 04:00 PM
Discussion
Location: SLMath: Eisenbud Auditorium, Online/Virtual