Seminar
Location: SLMath: Online/Virtual, Baker Board Room
In the United States, a now widely accepted legal framework for judging the fairness of a political redistricting map is to compare it to an ensemble of randomly generated alternatives. But what should we mean by a "random" map, and how can we efficiently generate one? There is a family of algorithms for this task, all based on the following idea: draw a uniformly random spanning tree on the graph of indivisible geographic units (e.g., precincts or census blocks), find edge(s) whose removal splits the tree into sub-trees of roughly equal size, and take the graph partition induced by the sub-trees as your redistricting map. These algorithms are very efficient in practice and have been successfully used in several high-profile court cases. However, numerous graph-theoretic and combinatorial questions about these algorithms remain open, of both theoretical and practical interest. In this talk I will discuss recent progress on some of these fronts, including:
- An approximate relationship between the "compactness" of a map and the probability of the map being sampled
- A 1/poly(n) lower bound on the probability that a random spanning tree of a grid graph can be split into exactly equal-sized pieces
- Obstructions to MCMC algorithms on grid graphs with many small districts, which can be thought of as a polyomino tiling problem
No prior knowledge of redistricting will be assumed; I will explain the setting and algorithms from scratch.
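To make the tree-splitting step concrete, here is a minimal illustrative sketch in Python (not the speaker's implementation). It bipartitions a graph by cutting one edge of a uniformly random spanning tree; the function name `random_tree_bipartition`, the `population` attribute, and the tolerance/retry parameters are assumptions for illustration only.

```python
# Minimal sketch of spanning-tree splitting into two balanced pieces.
# Requires a recent networkx (random_spanning_tree available in >= 2.6).
import networkx as nx


def random_tree_bipartition(graph, pop_attr="population", tol=0.02, max_tries=1000):
    """Try to split `graph` into two connected pieces of near-equal population
    by cutting a single edge of a uniformly random spanning tree."""
    total = sum(graph.nodes[v][pop_attr] for v in graph)
    for _ in range(max_tries):
        # Sample a uniformly random spanning tree of the graph.
        tree = nx.random_spanning_tree(graph)
        # Removing any tree edge yields exactly two sub-trees; accept the cut
        # if both sides are within `tol` of half the total population.
        for u, v in list(tree.edges()):
            tree.remove_edge(u, v)
            side = nx.node_connected_component(tree, u)
            pop = sum(graph.nodes[w][pop_attr] for w in side)
            tree.add_edge(u, v)
            if abs(pop - total / 2) <= tol * total / 2:
                return set(side), set(graph) - set(side)
    return None  # no sufficiently balanced cut found within max_tries


# Toy usage: a 6x6 grid graph with unit populations (so a balanced cut
# means two connected pieces of 18 cells each).
if __name__ == "__main__":
    g = nx.grid_2d_graph(6, 6)
    nx.set_node_attributes(g, 1, "population")
    parts = random_tree_bipartition(g)
    if parts:
        print(len(parts[0]), len(parts[1]))
```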