
Seminar

MMD Seminar: "Fairness in Kidney Exchange Programmes" & "How Good Are Privacy Guarantees? Platform Architecture and the Learning-Privacy Tradeoff" October 04, 2023 (01:30 PM PDT - 03:00 PM PDT)
Location: SLMath: Eisenbud Auditorium, Online/Virtual
Speaker(s) Péter Biró (KRTK – Institute of Economics), Alireza Fallah (Massachusetts Institute of Technology)
Video

Fairness in Kidney Exchange Programmes

How Good Are Privacy Guarantees? Platform Architecture and the Learning-Privacy Tradeoff

Abstract/Media

"How Good Are Privacy Guarantees? Platform Architecture and the Learning-Privacy Tradeoff" - Alireza Fallah      

Abstract: Many platforms deploy data collected from users for a multitude of purposes. While some are beneficial to users, others are costly to their privacy. The presence of these privacy costs means that platforms may need to provide guarantees about how and to what extent user data will be harvested for activities such as targeted ads, individualized pricing, and sales to third parties. In this work, we build a multi-stage model in which users decide whether to share their data based on privacy guarantees. We first introduce a novel mask-shuffle mechanism and prove it is Pareto optimal—meaning that it leaks the least about the users’ data for any given leakage about the underlying common parameter. We then show that under any mask-shuffle mechanism, there exists a unique equilibrium in which privacy guarantees balance privacy costs against the utility gains from pooling user data for purposes such as assessment of health risks or product development. Paradoxically, we show that as users’ value of pooled data increases, the equilibrium of the game leads to lower user welfare. This is because platforms take advantage of this change to reduce privacy guarantees so much that user utility declines (whereas, under a fixed mechanism, it would have increased). Furthermore, we show that platforms have incentives to choose data architectures that systematically differ from those that are optimal from the users’ point of view. In particular, we identify a class of pivot mechanisms, linking individual privacy to choices made by others, which platforms prefer to implement and which make users significantly worse off.

Based on joint work with Daron Acemoglu, Ali Makhdoumi, Azarakhsh Malekian, and Asu Ozdaglar.
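To give a flavor of the masking-plus-shuffling idea described in the abstract, the sketch below shows a generic mask-and-shuffle scheme: each user's report is perturbed by a mask, the masks are constructed to sum to zero so the aggregate (the common parameter, here the mean) is preserved exactly, and a shuffler permutes the reports to break the link between a report and a user's identity. This is an illustrative toy, not the authors' actual mechanism or its optimality construction; the function name and zero-sum mask construction are assumptions for exposition.

```python
import random

def mask_shuffle(values, rng=None):
    """Toy mask-and-shuffle sketch (not the paper's exact mechanism).

    Masks are drawn to sum to zero, so the aggregate of the reports
    equals the aggregate of the true values, while each individual
    report is perturbed and its owner hidden by the shuffle.
    """
    rng = rng or random.Random(0)
    n = len(values)
    # Draw n-1 Gaussian masks; the last mask cancels the others out.
    masks = [rng.gauss(0.0, 1.0) for _ in range(n - 1)]
    masks.append(-sum(masks))
    masked = [v + m for v, m in zip(values, masks)]
    rng.shuffle(masked)  # shuffler: unlink reports from user identities
    return masked

data = [3.0, 5.0, 7.0, 9.0]
reports = mask_shuffle(data)
# The platform can still estimate the common parameter (the mean),
# since the masks cancel in aggregate, but no single report reveals
# a single user's value.
print(sum(reports) / len(reports))  # equals sum(data)/len(data) = 6.0
```

The tension the talk studies lives in how much mask noise (privacy) versus aggregate accuracy (learning) the platform guarantees; here the aggregate is preserved perfectly only because the masks are exactly zero-sum.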

   
