The Efficiency Gap Makes No Sense (but all metrics show Republicans are better at gerrymandering)
Cracking, Packing, and Wasted Votes
The efficiency gap is a technique for measuring the level of gerrymandering in a state. Gerrymandering generally takes the form of what’s known as “cracking and packing.” Cracking takes a district in which the opposing party holds a majority and splits it into two districts in which that party holds a minority in each, thus denying your opponent that seat. However, this isn’t always possible, because your opponent might have too many votes in a given area. To reduce the remaining number of votes your opponent has while minimizing the number of seats they hold, you can use a technique called packing, in which you create a district your opponent wins by a huge margin.
To measure both cracking and packing, we can use the concept of a wasted vote. A wasted vote is any vote that did not contribute to a victory. There are two kinds of wasted votes: cracked waste and packed waste. A cracked wasted vote for party X is any vote in a district party X has lost. A packed wasted vote is any vote for party X over 50% in a district party X has won.
For example, let’s say that there are just two parties^{1} and 1000 voters in a district. The number of wasted votes for X in terms of the number of votes for X is as follows:
Note that the optimal cracking and packing levels for wasting your opponent’s votes are 49.9% and 100%, respectively. Basically, you want every win to be narrow and every loss to be huge.
Efficiency Gap
We calculate the efficiency gap as \(\frac{w_D - w_R}{v_D + v_R}\), where \(w_D\) and \(w_R\) are the numbers of wasted votes for the Democrats and Republicans, and \(v_D\) and \(v_R\) are the total numbers of votes for the Democrats and Republicans. This is the efficiency gap in favor of the Republicans, because a higher efficiency gap corresponds to more wastage for the Democrats.
Sounds pretty intuitive, right? Here’s why it makes no sense.
Let’s say you have a state with 100 voters and 20 districts. In this state, 80 voters are Republicans. The Republicans try to arrange the map so that they win all the seats instead of the 80% (16 seats) you’d expect them to win. And let’s say they do it the simple way, by cracking the 20% Democratic vote equally among all the districts (as above) so they have 80% of the vote in every district. Therefore, 30% of votes are wasted Republican votes (80% - 50%) and 20% of votes are wasted Democratic votes (they waste all their votes). This means that the efficiency gap is 10% in favor of the Democrats.
What?
Efficiency Percentage Gap
The issue here is that the efficiency gap breaks down when there is a large difference between the vote totals of the two parties. To compensate, we can divide before subtracting rather than after. To ensure we keep the same range of -50% to 50%^{2}, we divide by 2. We thus end up with the formula \(\frac12\left(\frac{w_X}{v_X} - \frac{w_Y}{v_Y}\right)\).
If we use the above example, we get that 37.5% of the Republican votes are wasted (30%/80%) and 100% of the Democratic votes are wasted (20%/20%). Thus, we have an efficiency percentage gap of 31.3% in favor of the Republicans, as you might expect.
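Both calculations can be sketched in a few lines of Python; the per-district layout (4 Republicans, 1 Democrat in each of the 20 five-voter districts) is implied by the even cracking described above, not stated explicitly in the post:

```python
# Sketch comparing the efficiency gap and the efficiency percentage gap
# on the hypothetical state: 100 voters, 20 districts, 80% Republican,
# with the Democratic vote cracked evenly (each district is 80% R / 20% D).

def wasted(votes_for, votes_against):
    """Wasted votes for one party in one district."""
    total = votes_for + votes_against
    if votes_for > votes_against:      # won: waste is the surplus over 50%
        return votes_for - total / 2
    return votes_for                   # lost: every vote is wasted

districts = [(4, 1)] * 20              # (R votes, D votes) per district

w_R = sum(wasted(r, d) for r, d in districts)
w_D = sum(wasted(d, r) for r, d in districts)
v_R = sum(r for r, d in districts)
v_D = sum(d for r, d in districts)

eff_gap = (w_D - w_R) / (v_D + v_R)          # positive = favors Republicans
eff_pct_gap = 0.5 * (w_D / v_D - w_R / v_R)

print(eff_gap)      # -0.1: absurdly "favors Democrats"
print(eff_pct_gap)  # 0.3125: favors Republicans, as expected
```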
Does this make a difference when calculated against practical data?^{3}
This graph plots the gap towards the Republicans on both metrics. For example, Maryland is on the bottom left as a state gerrymandered in favor of the Democrats while North Carolina is on the top right as a state gerrymandered in favor of the Republicans.
Regardless of which metric we use, the general trend remains. Blue states tend to be gerrymandered blue while red states tend to be gerrymandered red, but there’s a clear winner: the Republicans, who seem to have gerrymandered most of the purple states and thus have gerrymandered the entire US (the largest circle).
Using the efficiency percentage gap does in fact make a difference in lopsided cases like the hypothetical 80% Republican state. Note that states that have a Democratic-leaning efficiency gap but a Republican-leaning efficiency percentage gap are red states (Arkansas, Kentucky), while states that have the opposite difference tend to be Democratic states (Massachusetts, Hawaii). In general, states that are below the line (efficiency gap more Republican-leaning than efficiency percentage gap) tend to be blue states, while states above the line tend to be red states.
In fact, party affiliation correlates more heavily with the efficiency percentage gap than with the efficiency gap, suggesting that it’s a better model for measuring gerrymandering.
Effectiveness gap
Remember our example with the highly partisan state? We said that the Republicans deserved 80% of the seats since they held 80% of the votes. Why not base a gerrymandering measure on simply subtracting the percentage of won votes from the percentage of captured seats?
I call this measure the “effectiveness gap” because rather than measuring efficiency (wasted votes) it directly measures the effectiveness of a map in producing seats for one party or another.
In our example, the Republicans got 80% of the votes but 100% of the seats, so there was a 20% effectiveness gap in favor of them.^{4}
It turns out that this measure is pretty much the same as the efficiency percentage gap in practice (even though it differed in this theoretical unrealistically lopsided state).
It looks like the efficiency percentage gap tends to run somewhat higher than the effectiveness gap in highly partisan states, but other than that, they track quite closely.
Potential problems with the effectiveness gap
One problem with the effectiveness gap is that it might be somewhat sensitive to turnout differences. For example, take this state which has 100 voters and 10 districts.
In this example, the map is clearly gerrymandered against Republicans, with an effectiveness gap of 64% - 40% = 24% in favor of Democrats. However, let’s say that noncompetitive districts have 50% less turnout. Then we have an overall turnout of 80 votes, of which the Republicans get 44 and the Democrats get 36, for an effectiveness gap of 55% - 40% = 15%, a much smaller result.
We can adjust for this effect by acting as if every district had an equal turnout when calculating what percentage of votes a given party received. We call this measure the weighted effectiveness gap. Interestingly, this adjustment seems to have little to no effect on the existing map.
So the effectiveness gap seems to be fairly robust to this potential problem.
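The turnout example above can be sketched as follows; the exact district layout (four 10-0 Republican blowouts plus six 4-6 Democratic wins) is my reconstruction to reproduce the 64%-votes/40%-seats figures in the text, not something the post specifies:

```python
# Sketch of the turnout-sensitivity example: 10 districts of 10 voters each
# at full turnout, giving Republicans 64% of votes but only 40% of seats.

full = [(10, 0)] * 4 + [(4, 6)] * 6    # (R votes, D votes) per district

def effectiveness_gap(districts):
    """Republican vote share minus Republican seat share."""
    v_R = sum(r for r, d in districts)
    v_D = sum(d for r, d in districts)
    seats_R = sum(1 for r, d in districts if r > d)
    return v_R / (v_R + v_D) - seats_R / len(districts)

# Halve turnout in the noncompetitive (blowout) districts.
low_turnout = [(r / 2, d / 2) if min(r, d) == 0 else (r, d) for r, d in full]

def weighted_effectiveness_gap(districts):
    """Adjusted version: treat every district as if turnout were equal."""
    shares = [r / (r + d) for r, d in districts]
    seats_R = sum(1 for s in shares if s > 0.5)
    return sum(shares) / len(shares) - seats_R / len(districts)

print(effectiveness_gap(full))                  # 0.24 (the 24% gap)
print(effectiveness_gap(low_turnout))           # 0.15 (shrinks with turnout)
print(weighted_effectiveness_gap(low_turnout))  # 0.24 (adjustment restores it)
```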
Why use the efficiency gap?
My honest answer is that I have no idea. I don’t see any reason why political scientists seem to prefer it over the (to me) much more intuitive effectiveness gap. There doesn’t seem to be any political reason, since both models give the same high-level picture of mostly Republican (though some Democratic) gerrymandering. My only possible explanation is that the efficiency gap appears to more closely measure gerrymandering, since it explicitly measures packing and cracking. However, the efficiency percentage gap also does that, so I’m not sure that’s the right explanation.
Anyway, if you’re a political scientist, know the explanation for this, and feel generous with your time, you can contact me at my first name @berkeley.edu.

The United States has two political parties. A vote for the Green, Libertarian, or American Independent Party is a wasted vote. But if you’re voting for the AIP (the party that nominated Wallace), you should waste your vote, so I guess I’m fine if you vote for the AIP. ↩

A minimum of 0% and a maximum of 50% of the total vote count can be wasted on either party. Thus the efficiency gap will always be somewhere between -50% and 50%. On the other hand, a party can waste 100% of its votes if it loses every seat and 0% if it barely wins every seat, so the efficiency percentage gap, before the division by 2, is found in the range -100% to 100%. ↩

I used raw data from this data source to calculate the total Democratic and Republican vote totals for every house race in 2016. I treated any uncontested races as if they were races in which the other party got 0 votes. I also just ignored all independents, even if they might have clearly had some affiliation. ↩

There’s no need to normalize by a factor of 2 because a party can’t increase its share of seats by more than 50 percentage points through gerrymandering. At best, it can barely win every district it wins, doubling its representation. Thus, optimally, a party with just over 50% of the vote can control every seat, for a gain of 50%. A symmetric argument works for loss, so we have that the effectiveness gap is in the range of -50% to 50%. ↩
How You Get Homeopathy
Homeopathy is a system that has two primary claims. The first is the law of similars. It says that a dilute solution of a substance cures the ailment caused by a concentrated solution of the same substance. The second is the law of dilution, which says that the more dilute a homeopathic preparation is, the stronger its effect.
The second claim is the one people tend to focus on because it is clearly incorrect. The anti-homeopathy \(10^{23}\) campaign is named after the order of magnitude of Avogadro’s number, that is, the number of molecules of a substance in a liter of a 1M solution (typically a fairly strong solution; 1M saltwater is 5 teaspoons of salt in a liter of water). If you dilute by more than that amount, and many “strong” homeopathic preparations do, then with high probability you literally have 0 molecules of the active ingredient in your preparation. As the \(10^{23}\) campaign’s slogan makes clear, “There’s nothing in it,” and homeopathic medicines are just sugar pills.
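A back-of-the-envelope check makes the point concrete. The "30C" dilution level used here is my example of a common homeopathic potency (a 1:100 dilution repeated 30 times), not a figure from the campaign:

```python
# How many molecules survive a 30C homeopathic dilution of 1 liter of a
# 1 M solution? (30C = thirty successive 1:100 dilutions = a factor of 10^60.)

AVOGADRO = 6.022e23        # molecules per mole

moles = 1.0                # 1 M solution, 1 liter
molecules = moles * AVOGADRO

dilution_factor = 100 ** 30
expected_molecules = molecules / dilution_factor

print(expected_molecules)  # ~6e-37: effectively zero molecules remain
```

Since the expected count is about \(10^{-37}\), the chance that even a single molecule of the original substance remains in the bottle is essentially nil.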
What is NP?
What is P?
This seems like a silly question, easily answered by the surprisingly comprehensible explanation on P’s Wikipedia page:
It contains all decision problems that can be solved by a deterministic Turing machine using a polynomial amount of computation time
OK, so it’s the set of all decision problems (yes-or-no questions) that are \(O(n^k)\) for some integer \(k\). For example, sorting (or rather, its decision version), which is \(\Theta(n \log n)\), is in P, since it’s also \(O(n^2)\).
But let’s go for an alternate definition. Let’s take a C-like language and get rid of a bunch of features: subroutine calls, while loops, goto, for loops with anything other than `i < n` in the second field (where `n` is the length of the input), and modifying the for loop variable within the loop. Effectively, this language’s only looping construct is `for(int varname = 0; varname < n; varname++) {/* Code that never modifies varname */}`. Let’s call this language `CFOR`.
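`CFOR` itself is C-like, but the same restriction is easy to mimic as a sketch in Python: only `for i in range(n)` loops over the input length, no while loops, no recursion, and the loop variable is never modified. Here is a decision problem solved within those limits (the example and its name are mine):

```python
# A decision problem written in the CFOR spirit: only bounded for loops
# over n = len(input), loop variables never modified inside the body.

def has_duplicate(s: str) -> bool:
    """Does the input contain a repeated character?
    Two nested bounded loops, so O(n^2) -- comfortably in P."""
    n = len(s)
    answer = False
    for i in range(n):
        for j in range(n):
            if i != j and s[i] == s[j]:
                answer = True
    return answer

print(has_duplicate("abcab"))  # True
print(has_duplicate("abc"))    # False
```

Note that with only loops of this shape, any nesting of depth \(k\) runs in \(O(n^k)\) steps, which is the connection to the polynomial bound in P’s definition.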
Storing Data With the CircleTree
The Pair Abstraction
Computers need some way to store information. More specifically, we want to be able to store multiple pieces of data in one place so that we can retrieve it later. For now, we’ll look at the simplest way of storing data, a pair.
To express the concept of a “pair”, we need three machines.
 A way to put two things together (construction, or “C” for short)
 A way to get the first element (first, or “F” for short)
 A way to get the second element (second, or “S” for short)
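As a hedged sketch of what these three machines do, here they are realized with ordinary closures; Python stands in for the circle-tree drawings, and the names `cons`, `first`, and `second` are mine, mirroring C, F, and S above:

```python
# Church-style pairs: the pair itself is a "machine" that remembers two
# things, and the accessors hand it a selector to extract one of them.

def cons(a, b):
    """C: put two things together."""
    return lambda selector: selector(a, b)

def first(pair):
    """F: get the first element back out."""
    return pair(lambda a, b: a)

def second(pair):
    """S: get the second element back out."""
    return pair(lambda a, b: b)

p = cons(1, "two")
print(first(p))   # 1
print(second(p))  # two
```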
CircleTree as a Calculator
Using CircleTree Drawings as a Calculator
So this isn’t just a procrastination game for doodling when you’re bored (and have a lot of colored pencils, for some reason). We can use circletree as a way to express math!
Numbers
For math, the first thing we need are numbers. I’m just going to draw some pictures which are defined to be the numbers. We don’t worry too much about whether they are “really” numbers or not in the way that we don’t worry about whether “102” or “CII” is a better way to explain the concept of “how many cents in a dollar plus how many hands a person has”ness.^{1}
So this is 0:

Although, off the record, for doing math CII is a terrible way of representing 102. ↩
Introducing the CircleTree System
If you want to follow along on paper, you’ll want some colored pencils. Like at least 5 colors, not including a pencil to draw links.
Circtrees
In any case, we’ll call the diagrams we’re interested in “circtrees.”^{1} Circtrees can be constructed in one of three ways.
1. Dots
A dot of any color is a circtree. Here are some examples^{2}:
Counting Monoids
What’s a Monoid?
A monoid is a set \(M\) with an identity element \(e : M\) and an operation \(* : M \to M \to M\), where we have the laws:
 \(x * e = x\)
 \(e * x = x\)
 \(x * (y * z) = (x * y) * z\)
We write a monoid as a tuple \((M, e, * )\). Some example monoids (written in pseudo-Haskell) are:

```haskell
(Int, 0, (+))
(Int, 1, (*))
(Bool, True, (&&))
((), (), \x y -> ())
([a], [], (++))
```
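A quick sanity check of the monoid laws on a few of these examples, transcribed into Python. This only tests finitely many sample elements, so it is a spot check rather than a proof:

```python
# Verify identity and associativity for several (M, e, *) examples.
from itertools import product

monoids = [
    (0, lambda x, y: x + y, [0, 1, 2, 5]),         # (Int, 0, (+))
    (1, lambda x, y: x * y, [1, 2, 3]),            # (Int, 1, (*))
    (True, lambda x, y: x and y, [True, False]),   # (Bool, True, (&&))
    ([], lambda x, y: x + y, [[], [1], [1, 2]]),   # ([a], [], (++))
]

for e, op, samples in monoids:
    for x in samples:
        assert op(x, e) == x and op(e, x) == x     # x * e = x, e * x = x
    for x, y, z in product(samples, repeat=3):
        assert op(x, op(y, z)) == op(op(x, y), z)  # associativity

print("all monoid laws hold on the sampled elements")
```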
The Porus Mirror Problem, Part 2
Propagation
From last time, we know the quantity of light that escaped at each location and direction around the unit circle. We want to find how this translates into quantities of light at any point away from the circle. This situation can be modeled like so:
The Porus Mirror Problem, Part 1
The porus mirror problem
I’m pretty sure I came up with this problem on my own, but I’m not sure if it’s an existing problem under a different name. Anyway, the situation is as follows:
 We have a very short cylindrical tube. In fact, we’ll ignore the third dimension and just use a circle. For simplicity, we’re using units such that the radius is 1.
The tube is filled with a gas that emits corpuscles in one large burst. We assume a large enough number that we can model them continuously. We additionally assume that the emissions are uniform in location and direction.
 The tube itself is semitransparent, with transparency determined by a function \(t: [0, 2\pi] \to [0, 1]\) of angle around the circle. Any light that does not pass through is reflected across the normal.
We wish to calculate, at each point along the surface of the tube, the total quantity of light emitted in each direction, from which we can easily calculate the light pattern this system will cast on any shape outside.
Dependent Types and Categories, Part 1
So I’ve been trying to go through Category Theory for Scientists by writing the functions as code. So I obviously decided to use Haskell. Unfortunately, I ran into a wall: Haskell is actually pretty bad at representing categories.
Haskell is a Category
Now, at first this was incredibly surprising. Frankly, I had started learning about category theory because of its association with Haskell. Unfortunately, I didn’t realize that Haskell isn’t category theory, it’s a category.
For instance, look at this definition of a category in math:
Chaos, Distributions, and a Simple Function
\(\newcommand{\powser}[1]{\left\{ #1 \right\}}\)
Iterated functions
The \(n\)-th iterate of a function \(f\), written \(f^n\), is defined as:
\[f^0 = \mathrm{id}\] \[f^n = f \circ f^{n-1}\]
For example, \((x \mapsto 2x)^n = x \mapsto 2^n x\).
Most functions aren’t all that interesting when iterated. \(x^2\) goes to \(0\), \(1\), or \(\infty\) depending on the starting point; \(\sin\) goes to 0; \(\cos\) goes to something close to 0.739. \(\frac{1}{x}\) has no well-defined iteration, as \((x \mapsto x^{-1})^n = x \mapsto x^{(-1)^n}\), which does not converge.
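These claims are easy to check numerically; a minimal sketch:

```python
# Iterate f on a starting point n times and observe the limiting behavior.
import math

def iterate(f, x, n):
    for _ in range(n):
        x = f(x)
    return x

print(iterate(lambda x: x * x, 0.5, 50))   # 0.0 (starting inside (-1, 1))
print(iterate(math.cos, 1.0, 100))         # ~0.739085 (the fixed point of cos)
print(iterate(math.sin, 1.0, 10000))       # small and still shrinking toward 0
```

The \(\sin\) iteration is a nice illustration of why "goes to 0" can hide very different speeds: \(\cos\) converges geometrically, while iterated \(\sin\) crawls toward 0 like \(\sqrt{3/n}\).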
Haskell Classes for Limits
OK, so the real reason I covered Products and Coproducts last time was to work toward a discussion of limits, which are a concept I have never fully understood in category theory.
Diagram
As before, we have to define a few preliminaries before we can define a limit. The first thing we can define is a diagram, or a functor from another category to the current one. In Haskell, we can’t escape the current category of Hask, so we instead define a diagram as a series of types and associated functions. For example, we can define a diagram from the category \(\mathbf 2\) containing two elements and no morphisms apart from the identities as a pair of undetermined types.
A diagram from the category \(\mathbf 2*\), a category containing two elements and an arrow in each direction, along with the identities (this implies that the two arrows compose, in both directions, to the identity):
Haskell Classes for Products and Coproducts
Product Data Structures
Product types are a familiar occurrence for a programmer. For example, in Java, the type `int` \(\times\) `int` can be represented as:
```java
class IntPair {
    public final int first;
    public final int second;

    public IntPair(int first, int second) {
        this.first = first;
        this.second = second;
    }
}
```
Monoids, Bioids, and Beyond
Two Views of Monoids
Monoids
Monoids are defined in Haskell as follows:
```haskell
class Monoid a where
    m_id :: a
    (++) :: a -> a -> a
```
Monoids define some operation with an identity (here called `m_id`). We can define the required laws, identity and associativity, as follows:
```haskell
monoidLaw1, monoidLaw2 :: (Monoid a, Eq a) => a -> Bool
monoidLaw1 x = x ++ m_id == x
monoidLaw2 x = m_id ++ x == x

monoidLaw3 :: (Monoid a, Eq a) => a -> a -> a -> Bool
monoidLaw3 x y z = (x ++ y) ++ z == x ++ (y ++ z)
```
Nontrivial Elements of Trivial Types
We know that types can be used to encode how to make useful data structures.
For example, a vehicle can be encoded as:
```haskell
data Vehicle = Car { vin :: Integer, licensePlate :: String }
             | Bike
```

In this case, valid values include cars with VINs and license plate numbers, or bikes, which have neither of these data points.
However, some data types can seem far less useful.
The Autogeneration of Landscapes
OK, so a break from theory and CS, and to applications. How do we produce reasonably realistic landscapes using differential equations?
Mountains and Valleys
To simulate mountains and valleys, we can use a fairly simple algorithm that adds a random amount to a region, then recursively calls itself on each quadrant of the region. Afterwards, an image generally looks something like:
We can apply a Gaussian blur to the image to get a reasonably realistic depiction of mountains and valleys.
We can define the elevation as a function \(E(x, y, t)\). Representing this as a landscape:
CStyle Pointer Syntax Considered Harmful
Declaration Follows Use
In C, a pointer to int has type `int*`, but really, this isn’t the right way of looking at it. For example, the following declaration:

```c
int* x, y;
```

creates `x` of type `int*` and `y` of type `int`.
The Triviality Norm
Feynman famously said that to a mathematician, anything that has been proven is trivial. This statement, while clearly a lighthearted dig at mathematics, does have a striking ring of truth to it.
Logical Implication
In logic, there is the concept of implication. The idea is that if \(B\) is true whenever \(A\) is true we can say that \(A \implies B\). This makes sense for statements that depend on a variable. For example, we can say that \(x > 2 \implies x > 0\).
A Monadic Model for Set Theory, Part 2
\(\newcommand{\fcd}{\leadsto}\)
Last time, we discussed funcads, which are a generalization of functions that can have multiple or no values for each element in their input.
Composition of Funcads
The composition of funcads is defined to be compatible with the composition of functions, and it takes a fairly similar form:
\[f :: Y \fcd Z \wedge g :: X \fcd Y \implies \exists (f \odot g :: X \fcd Z),\ (f \odot g)(x) = \bigcup_{y \in g(x)} f(y)\]
In other words, map the second function over the result of the first and take the union. If you’re a Haskell programmer, you probably were expecting this from the title and the name “funcad”: the funcad composition rule is just Haskell’s `>=>`.
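As a hedged sketch, a funcad \(X \leadsto Y\) can be represented as a function from an element to a *set* of results, with composition as "map, then union" (this mirrors Haskell's `>=>` specialized to the set/nondeterminism monad; the Python encoding is my illustration):

```python
# Funcad composition: (f . g)(x) = union of f(y) over every y in g(x).

def compose(f, g):
    """Apply the funcad g, then map f over its results and union them."""
    return lambda x: set().union(*[f(y) for y in g(x)])

g = lambda x: {x, -x}    # a funcad with two values per input
f = lambda y: {y + 1}    # an ordinary function, lifted to a funcad

h = compose(f, g)
print(h(3))                             # {4, -2}
print(compose(f, lambda x: set())(3))   # set(): no values to propagate
```

Note how a funcad with an empty result absorbs everything downstream, which is exactly the "no values for some inputs" behavior described last time.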
A Monadic Model for Set Theory, Part 1
\(\newcommand{\fcd}{\leadsto}\)
Solving an Equation
A lot of elementary algebra involves solving equations. Generally, mathematicians seek solutions to equations of the form
\[f(x) = b\]
The solution that most mathematicians come up with for this problem is to define left and right inverses. The left inverse of a function \(f\) is a function \(g\) such that \(g \circ f = \mathrm {id}\); this inverse exists for any function that is one-to-one. There is also a right inverse, a function \(g\) such that \(f \circ g = \mathrm {id}\). For example, the left inverse of the function
\[x \mapsto \sqrt x :: \mathbb R^+ \rightarrow \mathbb R\]
is the function
\[x \mapsto x^2 :: \mathbb R \rightarrow \mathbb R^+\]
What is Category Theory?
\(\newcommand{\mr}{\mathrm}\) So, I was reading Category Theory for Scientists recently and kept having people ask me: what is category theory? Well, until I hit chapter 34, I didn’t really have an answer. But now I do, and I realize that understanding what it is doesn’t really require understanding how it works.
Sets and functions
So, set theory is all about sets and functions. Why are we talking about set theory and not category theory, you may ask? I’ll get to category theory eventually, but for now, let’s discuss something a little more concrete.
Let’s look at subsets of the set \(\{a, b\}\). We have four possibilities, which we name the Empty set, A set, B set, and Universal set:
\[E = \{\}, A = \{a\}, B = \{b\}, U = \{a, b\}\]
Stupid Regex Tricks
Regexes, the duct tape of CS
Regexes are truly amazing. If you haven’t heard of them, they are a quick way to find (and sometimes replace) some body of text with another body of text. The syntax, while it has its critics, is generally pretty intuitive; for example, a zip code might be represented as `[0-9]{5}(\s*-\s*[0-9]{4})?`, which is read as “a five digit code, optionally followed by a spaced dash and a four digit code”. Some whitespace and named groups might help the reading (use COMMENTS mode when parsing):
```
(?P<main_code>
    [0-9]{5}
)
(               # optional extension
    \s*-\s*     # padded dash
    (?P<optional_extension>
        [0-9]{4}
    )
)?
```
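For the curious, here is the pattern exercised with Python's `re` module, where COMMENTS mode is spelled `re.VERBOSE`; the sample zip codes are made up:

```python
# The zip-code regex from above, with whitespace/comments enabled.
import re

ZIP = re.compile(r"""
    (?P<main_code>
        [0-9]{5}
    )
    (               # optional extension
        \s*-\s*     # padded dash
        (?P<optional_extension>
            [0-9]{4}
        )
    )?
    """, re.VERBOSE)

m = ZIP.fullmatch("94720 - 1234")
print(m.group("main_code"))           # 94720
print(m.group("optional_extension"))  # 1234
print(bool(ZIP.fullmatch("94720")))   # True: the extension is optional
```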
Dictp Part 2
OK, so to continue the Dictp saga, some syntactic sugar.
References to local variables.
The parser can be enhanced by rewriting anything that isn’t a builtin command or a string with a lookup to `__local__`. E.g.,

- `a` \(\mapsto\) `__local__['a']`
- `0 = 1` \(\mapsto\) `__local__['0'] = __local__['1']`
- `0?` \(\mapsto\) `__local__['0']?`
- `'0'` \(\mapsto\) `'0'`
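A toy sketch of this rewrite rule (my own whitespace-tokenized approximation, not the real Dictp parser, which would need a proper tokenizer):

```python
# Desugar bare tokens into __local__[...] lookups, leaving string
# literals and builtin syntax alone.

BUILTINS = {"="}

def desugar(source: str) -> str:
    def wrap(token):
        if token in BUILTINS:
            return token
        if token.endswith("?") and len(token) > 1:      # postfix ? operator
            return wrap(token[:-1]) + "?"
        if token.startswith("'") and token.endswith("'"):  # string literal
            return token
        return f"__local__['{token}']"
    return " ".join(wrap(t) for t in source.split())

print(desugar("a"))      # __local__['a']
print(desugar("0 = 1"))  # __local__['0'] = __local__['1']
print(desugar("0?"))     # __local__['0']?
print(desugar("'0'"))    # '0'
```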
Dictp Part 1
So, a friend and I (I’m going to protect his anonymity in case he doesn’t want to be associated with the travesty that is what we ended up thinking up) were discussing ridiculous ideas for programming languages: you know the deal, writing C interpreters in Python, using only integers to represent everything, etc.
One idea, however, seemed a little less ridiculous to us (OK, just to me): a language in which there are two data types: Dictionary and String.
Infsabot Strategy Part 2
OK, so to continue our Infsaboting.
Correction
I made a mistake last time. I included `SwitchInt`, a way to switch on values of `ExprDir`, which is ridiculous since an `ExprDir` can only be constructed as a constant or an `if` branch to begin with. So just imagine that I never did that.
Guess and Check
OK, so how should our AI construct these syntax trees? At a very high level, we want to be able to 1) assess trees against collections of trees and 2) modify them randomly. We can represent this as a pair of types:

```haskell
assess :: [RP] -> RP -> Double
modify :: RP -> StdGen -> (RP, StdGen)
```

I think the implementation of `assess` should be pretty clear: simply find the win rate (given that we can simulate an unlimited number of games).
Modify, on the other hand, is a little more complicated. There are a few ways a tree can be modified:
 Modify the value of a constant to be something else.
Modify the value of a field to point to something else, or to another field
 Add leaves to the tree
 Remove leaves as a simplification step
Infsabot Strategy Part 1
OK, so what is this Infsabot? Infsabot is a game I designed for people to write programs to play. Basically, it’s like chess, except each piece acts independently of the others.
The Game
There is a square board where each square is either empty or contains material. The game pieces, robots, require material to stay alive and perform tasks. At each turn, every robot reports which of the following it wants to do:
 Noop: The robot does nothing
 Die: The robot dies
 Dig: The robot digs to try to extract material
 Move: The robot can move in one of the four compass directions
Fire: The robot can fire a quick-moving projectile (i.e., travel time = 0) in a compass direction
Message: The robot can send a quick-moving message in a compass direction
 Spawn: The robot can create a new robot with its own initial material and appearance
To decide what to do, a robot has access to two things:
A picture of the positions around it, to a given distance, including:
  Whether or not each spot contains material
  Whether or not each spot contains a robot, and its appearance (not its team)
A snapshot of its own hard drive, which is a massive relational database between Strings and Strings
A list of all messages received
What's a number?
OK, so a bit of a silly question. Of course, we all know what numbers are. But humor me, and try to define a number without using “number”, “quantity”, or anything that refers back to a number itself.
But, before we learn numbers, we have to first understand something simpler: counting. Couldn’t we define numbers in terms of counting? Well, yes, otherwise I wouldn’t be writing this article.
Now, to not end up with a forty page book, I’m only going to go over how to define the whole numbers (I’m going to ignore negative integers, fractions, \(\pi\), etc.)
Recursivized Types
So, a short break from \(\varepsilon\)-\(\delta\).
Anyway, I’m going to look at algebraic types in this section. Specifically, I am going to use Haskell types.
The provided code is all actual Haskell; in fact, I’m going to have to de-import the standard libraries because some of this is actually defined there. You can find all the code here.
Who Needs Delta? Defining the HyperReal Numbers
\(\renewcommand{\ep}{\varepsilon}\) \(\renewcommand{\d}{\mathrm d}\)
In high school Calculus, solving differential equations often involves something like the following:
\[\frac{\d y}{\d x} = \frac{y}{x}\]
\[\frac{\d y}{y} = \frac{\d x}{x}\]
\[\int \frac{\d y}{y} = \int \frac{\d x}{x}\]
\[\log y = c + \log x\]
\[y = kx\]
Now, I don’t know about anyone else, but this abuse of notation really bothered me at first. Everyone knows that \(\d x\) isn’t an actual number by itself, but we seem to use it all the time as an independent variable.
Ancient Greek Abstract Algebra: Introduction
Note: I’ve put the Ancient Greek Abstract Algebra series on hold for now. Please take a look at some of my other post(s).
Modern constructions of the natural numbers
The natural numbers are taught to children as a basic facet of mathematics. One of the first things we learn is how to count. In fact, the name “Natural Numbers” itself implies that the natural numbers are basic.
However, modern mathematicians seem to have trouble with accepting the natural numbers as first class features of mathematics. In fact, there are not one but two common definitions of the naturals that can be used to prove the Peano Axioms (the basic axioms from which most of number theory can be proven).
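One of the standard set-theoretic definitions alluded to here can be sketched concretely (calling it the von Neumann construction, where \(n = \{0, 1, \ldots, n-1\}\), is my gloss, not something the post names):

```python
# Von Neumann naturals built from nothing but (frozen) sets:
# 0 is the empty set, and S(n) = n ∪ {n}.

def succ(n: frozenset) -> frozenset:
    """Successor: S(n) = n ∪ {n}."""
    return n | frozenset({n})

zero = frozenset()
one = succ(zero)   # {0}
two = succ(one)    # {0, {0}}

print(len(two))    # 2: each von Neumann natural has itself-many elements
print(zero in two) # True: every smaller natural is a member
print(one in two)  # True
```

The pleasant property that \(n\) has exactly \(n\) elements, and that \(m < n\) is literally \(m \in n\), is part of why this construction is popular for proving the Peano axioms.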