Simulating the Court of Common Pleas Election

The 2017 Philadelphia Primary election is May 16th. In prior posts, we’ve looked into the dynamics of an often-overlooked race: judges for the Court of Common Pleas. In a few short days, voters will elect judges to serve ten-year terms in the state court with broad jurisdiction over Civil and Criminal trials, Family Court, and Orphans’ Court. In today’s post, I use our statistical findings to run simulations of the upcoming Common Pleas election and make some predictions.

Predicting Turnout

First, how many voters will show up on May 16th? Below is a plot from my earlier post about turnout (Figure 1) showing the total number of voters in each Democratic Primary, estimated as the total votes for the highest-vote race. The 2017 election is a District Attorney election, meaning it’s one of the elections along the red line at the bottom. For the last D.A. Democratic Primary, in 2013, only 66,000 Philadelphians voted – the lowest since 2004. I would be surprised if voter turnout is that low this year, given the highly competitive D.A. race and recent political engagement. A reasonable estimate would be 100,000, the number of voters who turned out in 2009 when Seth Williams first won the D.A. race, a year when Democrats were energized by Barack Obama’s election to the presidency. There has been a lot of buzz about strong Democratic turnout in light of the party’s newfound energy, but does this mean turnout could be even higher? A turnout of 150,000, the level seen in competitive gubernatorial elections, does not seem implausible. We’ll see. For now, I will stick with a guess of 100,000 voters on May 16th.

Fig. 1


The total number of votes cast for Common Pleas judges is higher because each voter is allowed to vote for more than one candidate (in 2017, each voter will be allowed to select up to nine). Interestingly, the number of judges that the average person votes for doesn’t obviously correlate with the maximum they are allowed to pick: the average number of Common Pleas votes per voter has ranged from 4.2 to 5.3 in the last four elections, even as the number of vacancies has ranged from 6 to 12. This data is represented in Figure 2. It seems like voters just push the button for about 5 candidates and stop. For the 2013 and 2009 District Attorney elections, that average was around 4.5, so that is the number I will use.
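The turnout guess and the votes-per-voter average combine into a total vote pool with simple arithmetic; a quick back-of-the-envelope in Python:

```python
# Back-of-the-envelope: total judicial votes under the turnout assumptions above.
turnout = 100_000        # assumed voters on May 16th
votes_per_voter = 4.5    # average Common Pleas selections in D.A.-year primaries

total_judicial_votes = turnout * votes_per_voter
print(f"{total_judicial_votes:,.0f} total votes for Common Pleas judges")
# 450,000 total votes for Common Pleas judges
```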

Fig. 2


Ballot Position in this Election

Candidate ballot position for the Court of Common Pleas matters. To determine position, candidates’ names are drawn at random in a lottery. I have shown in a prior post that being placed in the first column more than doubles a candidate’s number of votes. How will ballot position play out this year?

The Court of Common Pleas ballot is awfully tall this year. In 2015, with 43 candidates, there were 7 rows and 7 columns.[1] In 2013, with 24 candidates, 6 rows and 4 columns. In 2011, with 34 candidates, 6 rows and 7 columns.


2015 Court of Common Pleas Sample Ballot

In 2017, with 27 candidates on the ballot [2], there will be 11 rows and 3 columns. This makes for a taller and skinnier ballot than any we’ve seen in recent elections and means that more candidates will receive the “first-column” boost. (Does this also mean that the “first-column” bonus will on average be smaller than before? We won’t know until we have the results.)
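For illustration, here is how the lottery and the column layout interact. This sketch assumes (my assumption, not confirmed in the post) that names fill the ballot top to bottom, left to right, in lottery order:

```python
import random

# Sketch of the ballot lottery: shuffle the candidates into a random
# order, then chunk that order into columns of n_rows names each.
def ballot_columns(n_candidates, n_rows):
    """Shuffle candidates into lottery order, then chunk into columns."""
    order = list(range(1, n_candidates + 1))
    random.shuffle(order)
    return [order[i:i + n_rows] for i in range(0, n_candidates, n_rows)]

cols = ballot_columns(27, 11)  # 2017: 27 candidates, 11 rows, 3 columns
print([len(c) for c in cols])  # [11, 11, 5]
```

With 27 names and 11 rows, the first column carries 11 of the 27 candidates, so the lottery hands the first-column boost to a larger share of the field than in past years.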

2017 Sample Ballot

2017 Court of Common Pleas Sample Ballot

The Role of Gender

In the previous analyses, I did not include candidates’ gender. It is an easy fix, so I have updated our model to include gender (more accurately, presumed gender), using the candidate’s name. [3] In the last four Common Pleas elections, candidates with presumed-female names received 23% more votes than those with presumed-male names, controlling for ballot position and endorsements. This comports with the premise that there is a group of voters who simply push the button for every woman on the ballot. With this data, we cannot distinguish whether every voter is a little bit more likely to vote for women, or whether there are a few voters who are much more likely to do so.

Simulating the Election

To simulate the election, I use our results statistically predicting a candidate’s votes based on (1) ballot position, (2) Inquirer/Philadelphia Bar Association Judicial Commission endorsement, (3) Democratic City Committee endorsement (DCC), and (4) presumed gender. The model leaves some uncertainty in the final vote—we cannot perfectly predict the votes knowing only these four variables—so I randomly sample that uncertainty, and count the winners. This uncertainty, in the real world, could be due to money spent on campaigning, name recognition, ward endorsements, and so on—factors we do not include in our model.

The purpose of these simulations is not to predict exactly who will win but to demonstrate how the patterns that we have identified in past elections will play out in the upcoming election, given the precise layout of the ballot and the specific combination of endorsements.

I think it is a reasonable assumption that the average voter knows absolutely nothing about Common Pleas candidates when they enter the voting booth. This actually makes the election much easier to predict. We do not have to account for vagaries of public opinion, or for a candidate’s personality and political position platform. Instead, we can expect to do fairly well using only “structural” features of the election: ballot position, endorsements, and gender. The model tells us how big the error in a candidate’s total typically is. We randomly generate errors of the right size, which allows candidates in some simulations to do much better or worse than expected. But when structural features dominate the process, the size of those errors will be smaller, and our predictions will be more certain.
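The loop described above can be sketched in a few lines of Python. Everything numeric below is an illustrative placeholder, not a fitted value from the underlying regression: the post reports a more-than-2x first-column boost and a 23% presumed-female boost, and the endorsement effects and error spread here are stand-ins.

```python
import math
import random

# Illustrative structural effects on log-votes (NOT the fitted values).
EFFECTS = {"first_col": math.log(2.2), "bar": math.log(1.5),
           "dcc": math.log(1.8), "female": math.log(1.23)}
SIGMA = 0.4  # assumed spread of the unexplained (log-scale) error

def simulate_once(candidates, n_seats=9):
    """Draw one election; candidates is a list of boolean feature dicts."""
    totals = []
    for c in candidates:
        mu = sum(EFFECTS[k] for k, v in c.items() if v)
        totals.append(math.exp(mu + random.gauss(0, SIGMA)))
    ranked = sorted(range(len(candidates)), key=lambda i: -totals[i])
    return set(ranked[:n_seats])  # indices of the winners

# Hypothetical feature data for 27 candidates (first 11 in column one),
# then the number of 10,000 simulations in which candidate 0 wins a seat.
candidates = [{"first_col": i < 11, "bar": i % 3 != 0,
               "dcc": i % 4 == 0, "female": i % 2 == 0}
              for i in range(27)]
wins = sum(0 in simulate_once(candidates) for _ in range(10_000))
```

Repeating `simulate_once` many times and tallying winners is all the machinery the summary statistics below require.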

I have simulated the election 10,000 times, with nine vacancies and 27 candidates on the ballot. We predict the individual winners—by name—but to avoid affecting the election, I will not discuss names. Rather, I will focus on illustrative summary statistics.

In the results, the ninth-place vote-getter receives on average 4.3% of the vote. Assuming 100,000 voters and that the average voter picks 4.5 judicial candidates, there will be 450,000 total votes for judges. The ninth-place winner receives on average 19,470 votes. The tenth-place candidate, the first loser, receives on average 18,580 votes (4.1%). The difference between 9th and 10th place is on average just 890 votes. If your vote matters in any election, it’s this one.

Fig. 3


The number of votes required to win varies from simulation to simulation. The tenth-place candidate’s total (the vote count to beat) falls below 20,240 votes in 90% of the simulations. Thus, an ambitious Win Count, defined as the number of votes sufficient to win in 90% of our simulations, is 20,240. Figure 4 shows the distribution of votes received by the tenth-place candidate over the simulations (it is the same distribution as the tenth-place candidate in Figure 3, rotated and zoomed in).
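Computing such a Win Count from simulation output is a one-line percentile calculation. A sketch, using synthetic stand-in totals centered on the simulated tenth-place average of 18,580 votes:

```python
import random

# Synthetic stand-ins for the tenth-place vote total in each simulation
# (the real values would come out of the election simulations themselves).
random.seed(0)
tenth_place = [random.gauss(18_580, 1_000) for _ in range(10_000)]

def win_count(totals, coverage=0.90):
    """Vote total at the `coverage` quantile of tenth-place finishes."""
    ranked = sorted(totals)
    return ranked[int(coverage * len(ranked)) - 1]

print(round(win_count(tenth_place)))
```

A candidate clearing this total would have beaten tenth place, and therefore won a seat, in 90% of the simulated elections.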

Fig. 4


We can also use the simulation results to see how various effects play out. Ballot position matters: on average, 6.0 winning candidates came from the first column, 2.5 from the second column, and only 0.6 from the last column. (These counts do not add up to 9 due to rounding.)

The Inquirer has, in the past two cycles, adopted the Philadelphia Bar Association Judicial Commission’s recommendations, and I assume it will do so again this year. In our simulations, an average of 6.6 candidates endorsed by the Inquirer/Philadelphia Bar win judgeships. That number looks high (it is certainly a large fraction of the nine seats), but only because the Commission rates most candidates as qualified, more candidates than there are vacancies: 18 of the 27 candidates in the upcoming election, even though only nine can win. If the endorsement had no effect at all, we would still expect six recommended candidates to win. A better way to read this result, then, is that even though a “not qualified” rating is rare, on average 2.4 unqualified candidates become judges. At least one unqualified candidate won in 99% of simulations (mainly because two “not qualified” candidates in the first column received DCC support).

The unqualified candidates won largely because of the Democratic Party and ballot position. Of the unqualified candidates who won, 67% were both in the first column and endorsed by the DCC, 27% were endorsed by the DCC alone, and 6% were in the first column alone (it is essentially impossible to win without any of these advantages). Completely unendorsed winners were not as rare as one might think: 22% of our simulations produced at least one candidate who won without either a Philadelphia Bar Association or a DCC endorsement. Virtually all of these winners (99.8%) were in the first column.
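Tallies like these are simple post-processing on the simulation output. A sketch, assuming a hypothetical list of winner index sets and the 11-row column layout:

```python
from collections import Counter

# Sketch: tally winners by ballot column across simulations. Assumes the
# hypothetical convention that candidate indices run down 11-row columns.
def column_of(idx, n_rows=11):
    return idx // n_rows + 1  # 1-based column number

def avg_winners_per_column(simulations, n_rows=11):
    """simulations: list of sets of winning candidate indices."""
    counts = Counter()
    for winners in simulations:
        for idx in winners:
            counts[column_of(idx, n_rows)] += 1
    return {col: c / len(simulations) for col, c in sorted(counts.items())}
```

For example, `avg_winners_per_column([{0, 5, 12, 22}, {1, 3, 11, 26}])` returns `{1: 2.0, 2: 1.0, 3: 1.0}`. The same pattern, counting winners by endorsement status instead of column, produces the unqualified-winner statistics above.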

Fig. 5


The Election is May 16th

On May 16th, nine candidates will win Common Pleas judgeships. These simulations make some clear predictions. Most of those winners will be from the first column of the ballot. At least one unqualified candidate will be voted to serve a 10-year term. Will these predictions bear out? We have a newly engaged electorate and an oddly tall Common Pleas ballot. We will soon see.

[1] These candidate counts exclude candidates who removed their names from the ballot; their positions appeared as blank spaces on the final ballot.

[2] The blank spaces are candidates whose names have been removed from the ballot.

[3] I define presumed gender as the gender that an uninformed reader assumes the name represents. There are some rigorous ways to model this using, for example, social security data, but for this blog post I approximated presumed-gender using a non-scientific poll of ESI employees. I prefer to use readers’ guesses at gender rather than looking up how the candidate actually identifies because the electorate likely will have no real experience with the candidate, and will be guessing in the same way.


Jonathan Tannen, Ph.D., is a Director at Econsult Solutions, Inc. (ESI). Jonathan’s dissertation research used GIS and large-scale computational techniques to develop a Bayesian method to measure the movement of neighborhood boundaries.
