
Seminar

Rotor-routing, smoothing kernels, and reduction of variance: breaking the O(1/n) barrier January 24, 2012 (04:00 PM PST - 05:00 PM PST)
Location: SLMath: Eisenbud Auditorium
Speaker(s) James Propp (University of Massachusetts)
Slides from this talk are available on the seminar page.

Abstract

If a random variable X is easier to simulate than to analyze, one way to estimate its expected value E(X) is to generate n samples distributed according to the law of X and take their average. If the samples are independent, then (assuming X has finite variance) the estimate will have typical error O(1/sqrt(n)). But often one can do better by introducing appropriate forms of negative dependence. In the toy context of simulating Markov chains to estimate their absorption probabilities, I'll describe a scheme that uses maximally anticorrelated identically distributed Bernoulli random variables (a.k.a. rotor-routers) and has typical error O((log n)/n), and a related scheme with typical error O(1/n). This might seem to be optimal, and indeed one cannot expect the average (X_1+...+X_n)/n to differ from its expected value E(X) by less than O(1/n). However, by using weighted averages that assign X_i less weight when i is near 1 or n and greater weight when i is near n/2, one can get estimators for E(X) with typical error significantly smaller than O(1/n).
The methods and ideas are mostly probabilistic and combinatorial. No prior knowledge of rotor-routing or smoothing kernels, and no familiarity with (or fondness for) statistics, will be assumed.
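To make the rotor-router idea concrete, here is a minimal illustrative sketch (not taken from the talk) for the toy setting of gambler's ruin: a simple random walk on {0, ..., N} absorbed at the endpoints, where the absorption probability at N starting from k is exactly k/N. Instead of flipping a fair coin at each interior site, each site keeps a rotor that deterministically alternates between sending the chip left and right, so that successive visits to a site are maximally anticorrelated; rotors persist across runs. The function names and parameters are invented for this example.

```python
import random

def rotor_estimate(N, start, n):
    """Estimate P(absorb at N) for simple random walk on {0..N}
    via rotor-routing: each interior site routes successive chips
    alternately left and right instead of flipping a fair coin."""
    rotor = [0] * (N + 1)  # 0 -> send next chip left, 1 -> send right
    hits = 0
    for _ in range(n):
        x = start
        while 0 < x < N:
            x += -1 if rotor[x] == 0 else 1
            rotor[x - (-1 if rotor[x] == 0 else 1)]  # (rotor state read below)
        hits += (x == N)
    return hits / n

def rotor_estimate(N, start, n):
    """Estimate P(absorb at N) for simple random walk on {0..N}
    via rotor-routing: each interior site routes successive chips
    alternately left and right instead of flipping a fair coin.
    Rotors persist from one chip to the next, which is the source
    of the negative dependence between runs."""
    rotor = [0] * (N + 1)  # 0 -> send next chip left, 1 -> send right
    hits = 0
    for _ in range(n):
        x = start
        while 0 < x < N:
            step = -1 if rotor[x] == 0 else 1
            rotor[x] ^= 1  # advance the rotor before the chip leaves
            x += step
        hits += (x == N)
    return hits / n

def monte_carlo_estimate(N, start, n, seed=0):
    """Baseline: n independent runs, typical error O(1/sqrt(n))."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = start
        while 0 < x < N:
            x += rng.choice((-1, 1))
        hits += (x == N)
    return hits / n

if __name__ == "__main__":
    N, k, n = 10, 3, 10_000  # true absorption probability at N is k/N = 0.3
    print("rotor error:", abs(rotor_estimate(N, k, n) - k / N))
    print("iid error:  ", abs(monte_carlo_estimate(N, k, n) - k / N))
```

For rotor walks the number of absorptions at N after n runs stays within a bounded discrepancy of n times the true probability, so the rotor estimate's error shrinks like O(1/n), while the independent-sample baseline fluctuates on the O(1/sqrt(n)) scale. The weighted-average (smoothing-kernel) refinement described in the abstract is not shown here.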
