Graduate student Mary Barker and a group of mathematicians help calculate whether political districts have been drawn fairly or gerrymandered for political gain.

When Massachusetts Governor Elbridge Gerry set out to redraw the state’s political map in 1812, he used district boundaries to group his opponents together, confining their voting power to a limited set of districts. The effort worked, and he helped his party win the 1812 state senate election. One district, near Boston, was so contrived and mangled that critics said it resembled a salamander. Thus, the term “gerrymander” was born.

Two centuries later, gerrymandering has become a common practice within our national political landscape. Tactics include drawing lines so that certain voters are packed into a limited number of districts, as well as breaking up groups of voters so they can’t gain a political foothold by forming a voting bloc. The shapes of political districts have been the subject of several recent federal court cases, with judges being asked to determine which shapes are fair. Because of the mathematical nature of shapes, and to get an unbiased take on redistricting maps, the legal system is increasingly turning to mathematicians and statisticians.

This summer, mathematics and statistics graduate student Mary Barker set out for Boston to develop mathematical and computer science techniques to aid in gerrymandering litigation. Barker participated in a six-week program at the Voting Rights Data Institute under the Metric Geometry and Gerrymandering Group (MGGG) – a large team of computer scientists and mathematicians from Tufts, Harvard, and MIT – aimed at uncovering the mathematical underpinnings of districting shape “fairness.”

“There is no ultimate metric for fairness,” said Barker, a second-year graduate student. But, she explained, given a districting plan and a set of rules and objectives for districting, any given map can be compared with similar maps, produced by computer simulations, to determine if the map in question is probabilistically an outlier.

Two important metrics used in this comparison are the mean-median score and the efficiency gap. The mean-median score is the difference between a party’s mean vote share and its median vote share across districts; roughly, it measures how far below 50 percent of the statewide vote the controlling party could fall while still holding half of the representation. The efficiency gap measures seats versus votes: it compares the two parties’ “wasted” votes – votes cast in losing districts, plus winning votes beyond the majority needed to carry a district – to gauge how well the seats won match the votes cast for each party.
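Both metrics are simple enough to sketch in code. Below is a minimal, hypothetical Python illustration – not the MGGG’s actual tooling – where the five-district election results and the sign conventions are made up for the example.

```python
from statistics import mean, median

def mean_median_score(vote_shares):
    """Party A's mean vote share minus its median vote share across
    districts; a large gap flags an asymmetric vote distribution."""
    return mean(vote_shares) - median(vote_shares)

def efficiency_gap(district_votes):
    """(Party A's wasted votes - party B's wasted votes) / total votes.
    Wasted = all votes in a lost district, plus winning votes beyond
    the simple majority needed to carry a won district."""
    wasted_a = wasted_b = total = 0
    for a, b in district_votes:
        needed = (a + b) // 2 + 1  # simple-majority threshold
        total += a + b
        if a > b:
            wasted_a += a - needed
            wasted_b += b
        else:
            wasted_b += b - needed
            wasted_a += a
    return (wasted_a - wasted_b) / total

# Hypothetical five-district state: (party A votes, party B votes).
# Party B is "packed" into the last district.
results = [(530, 470), (520, 480), (510, 490), (505, 495), (200, 800)]
print(mean_median_score([a / (a + b) for a, b in results]))
print(efficiency_gap(results))  # negative here: B wastes far more votes
```

In this toy example party B wins a large majority of its packed district while narrowly losing the other four, so most of its votes are wasted – the textbook signature of packing.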

In addition to finding and formatting voting and map data, Barker was tasked with developing Markov chain Monte Carlo methods (MCMC) for districting map simulations. The simulations work like this:

1. Enter a district map, and paint the districts with different colors.
2. Tessellate the map with small triangles (think of a mosaic of different-sized triangles, where the triangles on district borders can be painted two different colors).
3. For each triangle that is not a single color, flip a coin; if the coin lands heads, paint the triangle all one color, and if tails, the other.
4. Do this, say, a billion times to get a new map (this is the Monte Carlo part).
5. Repeat steps 1-4 a trillion times to get a menagerie of maps.
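The coin-flip step above can be sketched for a toy map. The representation here is an assumption made for illustration – units (triangles) live in a neighbor graph, and a hypothetical `flip_step` function repaints boundary units by coin flip; none of this is MGGG code.

```python
import random

def flip_step(coloring, neighbors, rng=random):
    """One Monte Carlo pass: for each unit on a district boundary,
    flip a coin to either keep its color or take a neighbor's."""
    new = dict(coloring)
    for unit, color in coloring.items():
        # district colors of adjacent units that differ from this one's
        other = {coloring[n] for n in neighbors[unit]} - {color}
        if other:  # the unit straddles a district border
            if rng.random() < 0.5:  # tails: repaint in the other color
                new[unit] = rng.choice(sorted(other))
    return new

# Toy map: four units in a row, split into "red" and "blue" districts.
neighbors = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
coloring = {1: "red", 2: "red", 3: "blue", 4: "blue"}
print(flip_step(coloring, neighbors, random.Random(0)))
```

Only the two middle units can change here, since the outer units have no differently colored neighbors – which is what keeps each step a small, local perturbation of the map.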

In essence, this process amounts to randomly exploring the universe of possible districting plans. It would be computationally infeasible to enumerate all such plans, but this algorithm yields a manageable, representative sample. Barker explains that we can use these maps to understand how likely it is for any given map to exhibit a certain characteristic. With a trillion maps in hand, it is possible to compare the initial map to its neighbors in the universe of maps (this is the Markov chain part). Probabilistic arguments can then be made to determine if the initial map is an outlier. If so, it’s a good indication that the map is biased.

Why is this a good indication? To answer this question, Barker points out that it is important to note that our coin is not fair. In fact, the coin carries a lot of information: It can be situationally programmed to consider population and contiguity restrictions, state redistricting rules, efficiency gap, mean-median score, and even party objectives. Given a set of rules, then, the simulation is well informed and indicates which maps are probable under those rules.
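One common way to build such an informed coin is a Metropolis-style acceptance rule, sketched below under made-up assumptions: a plan is a unit-to-district dict, the only rule encoded is population balance, and the weight `beta` is an arbitrary illustrative value – this is a sketch of the general technique, not the MGGG’s implementation.

```python
import math
import random

def population_score(plan, populations):
    """Penalty: squared deviation of each district's population
    from the ideal equal-population target (lower is better)."""
    totals = {}
    for unit, district in plan.items():
        totals[district] = totals.get(district, 0) + populations[unit]
    ideal = sum(populations.values()) / len(totals)
    return sum((t - ideal) ** 2 for t in totals.values())

def biased_coin(current, proposed, populations, beta=1e-4, rng=random):
    """Accept a proposed recoloring outright if it improves the score;
    otherwise accept with probability exp(-beta * increase), so the
    chain drifts toward plans that satisfy the encoded rules."""
    delta = population_score(proposed, populations) - population_score(current, populations)
    return delta <= 0 or rng.random() < math.exp(-beta * delta)

populations = {1: 100, 2: 100, 3: 100, 4: 100}
unbalanced = {1: "a", 2: "a", 3: "a", 4: "b"}
balanced = {1: "a", 2: "a", 3: "b", 4: "b"}
print(biased_coin(unbalanced, balanced, populations))  # True: improvements always accepted
```

Any additional rule – contiguity, compactness, a partisan-symmetry score – can be folded into the same penalty, which is how the coin ends up encoding so much situational information.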

For example, Barker explains, last year in Pennsylvania, the state’s Republicans were taken to court by the League of Women Voters over the fairness of a Republican map drawn in 2011. A member of the MGGG was hired to complete an analysis of the map. The court found the analysis convincing enough to deliver a judgment in accord with the findings of the MGGG, stating that the Republican map was unfair. The MGGG went on to analyze a replacement map, which was approved by the court. This new map will be used in the upcoming midterm elections and has harsh implications for Republicans: It’s speculated that up to four seats will swing to Democrats. In a state where registered Republican voters make up about 38 percent of all registered voters, yet control roughly 72 percent of the state delegation (13 of 18 congressional seats), the new map is certainly a step in the direction of mathematical fairness.

Moving forward, the group aims to provide open source code for states to use on their own redistricting plans and to develop more informed districting quantification techniques.

“It’s difficult to mathematically decide fairness,” said Barker. “But setting a precedent for using the inherent unbiasedness of mathematics to help arbitrate gerrymandering cases should be exciting for all citizens – mathematician or otherwise.”