In this Present Value post, ESI analyzes the effect of handing out listings of candidate recommendations outside of polling places, drawing on a field experiment conducted during May’s Philadelphia Municipal Primary in conjunction with the Philadelphia Bar Association.
The full working paper can be downloaded here.
Leading up to Philadelphia’s Municipal Primary in May, we talked a lot about the election for Court of Common Pleas, a low-attention race for ten-year judgeships with broad jurisdiction over Civil and Criminal trials, Family Court, and Orphans’ Court. We found that being in the first column of the ballot had the biggest effect on winning, bigger than being recommended by the Democratic Party, The Philadelphia Inquirer, or the Philadelphia Bar Association.
In May’s election, the pattern continued. With 27 candidates on the ballot for 9 judgeships, six of the newly elected judges came from the first column, two from the second, and one from the third. The Philadelphia Bar Association’s Judicial Commission rated 18 candidates as “Recommended” and 9 as “Not Recommended.” Nonetheless, three Not Recommended candidates were elected to the bench. They were all in the first column.
The Philadelphia Bar Association’s Recommendations
Before the election, ESI was approached by the Philadelphia Bar Association to measure the effect of an outreach strategy that they were considering: having volunteers hand out listings of their recommendations outside of polling places.
The Bar’s Commission on Judicial Selection and Retention releases official recommendations for all judicial elections. To develop its recommendations, the Commission surveys and interviews all judicial candidates. The Commission’s Investigative Division also conducts a thorough investigation of each candidate who applies to be rated, calling references and others who have worked with the candidate and reviewing publicly available information and writing samples. Candidates are then rated “Recommended” or “Not Recommended” by vote of the Judicial Commission, based on a set of specifically enumerated, publicly available standards, including demonstrated legal ability, experience, and ethical track record. Candidates who do not apply to be rated are automatically rated Not Recommended. The Commission’s recommendations are broadly considered an apolitical measure of candidate quality; in 2013, The Philadelphia Inquirer stopped making its own endorsements and began simply citing the Commission’s recommendations.
Handing out literature at polling places has a long history in Philadelphia politics. The City Democratic Party makes endorsements, but local Democratic Ward leaders then make their own, often bucking the City Party’s recommendations. One of the main avenues of their influence is having (potentially paid) people stand outside of polling places with flyers listing a recommended candidate. State law allows such electioneering so long as it takes place more than ten feet from a polling place’s entrance. Our analysis differs in that the flyers here represented an independent, non-partisan organization and were handed out by volunteers, but it may shed light on the value of those Ward flyers as well.
There are two reasons we might expect handing out flyers to be particularly effective for the Common Pleas race. First, listings are given directly to voters as they are about to vote, so there isn’t any wasted effort on non-voters, and there is much less chance that voters will forget the information by the time they are in the booth. In an election where voters can select up to nine candidates, the cognitive load matters. Second, the Common Pleas race is a low-attention race, so voters are less likely to have strong opinions about candidates, and may be more open to information from third parties.
The Philadelphia Bar recruited enough volunteers to cover 41 polling places, and ESI randomized the polling places to which those volunteers were assigned.
Why randomize? Randomization lets us be sure that we are measuring the effect of the volunteers, rather than some other feature of the neighborhoods. Suppose we didn’t randomize, but instead allowed volunteers to choose where to hand out literature. Volunteers would likely go to polling places close to their homes, which would introduce two problems. First, we wouldn’t be able to distinguish between the effect of handing out listings and the possibility that volunteers lived in systematically different neighborhoods from the rest of the city. If we measured that Recommended candidates did better in volunteer divisions than in others, would that be because of the listings, or because volunteers typically live in divisions where those candidates were more likely to do well anyway? We could control for observable confounders, such as income, race, and previous election results, but we could never be sure that there wasn’t something else about those divisions, besides the listings, that made them more likely to vote for Recommended candidates.
Second, we would likely get a non-representative sample of the city. We want to be able to look at different parts of the city, and to know that the results are representative of the entire electorate.
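As a concrete illustration, here is a minimal sketch of simple random assignment in Python. The polling-place identifiers are hypothetical, and the working paper describes the actual randomization procedure, which may, for instance, have stratified by neighborhood.

```python
import random

# A sketch of simple random assignment; names here are illustrative.
random.seed(2017)  # fix the seed so the assignment is reproducible

# The city's 773 polling places (hypothetical identifiers)
polling_places = [f"division_{i}" for i in range(773)]

# Draw the 41 volunteer sites uniformly at random;
# every other polling place serves as a control.
treated = set(random.sample(polling_places, k=41))
control = [p for p in polling_places if p not in treated]
```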
The Results: Handing Out Listings Works
The results are in. We measure the effect of the volunteers by looking at each polling place’s gap between the average vote share received by Recommended candidates and that received by Not Recommended candidates, and comparing that gap between the places with volunteers (“treated”) and those without (“control”). Handing out listings increased the gap between Recommended and Not Recommended candidates by 0.41 percentage points. In the control polling places, Recommended candidates won on average 3.87 percent of the vote, and Not Recommended candidates 3.38; in the treated polling places, Recommended candidates won 4.00 percent, and Not Recommended candidates 3.11. This may seem like a small effect, but the last Not Recommended candidate to win finished only 0.37 percentage points ahead of 10th place. In fact, if we tallied the election using only the polling places that received treatment, 8 of the 9 elected judges would have been Recommended, instead of only 6.
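For readers who want the estimator spelled out, here is a minimal sketch in Python, assuming a hypothetical table with one row per polling place and candidate. Plugging in the averages above gives (4.00 - 3.11) - (3.87 - 3.38) = 0.40 points, matching the reported 0.41 up to rounding.

```python
import pandas as pd

# Hypothetical layout: one row per polling place x candidate, with columns
# "treated" (bool), "recommended" (bool), and "vote_share" (percent).

def rec_gap(arm: pd.DataFrame) -> float:
    """Average vote share of Recommended minus Not Recommended candidates."""
    rec = arm.loc[arm["recommended"], "vote_share"].mean()
    not_rec = arm.loc[~arm["recommended"], "vote_share"].mean()
    return rec - not_rec

def treatment_effect(df: pd.DataFrame) -> float:
    """Difference in the Recommended gap between treated and control places."""
    return rec_gap(df[df["treated"]]) - rec_gap(df[~df["treated"]])
```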
[Table: Grey = Recommended by the Bar Association. S.E. are bootstrapped standard errors of the percent of the vote. Prob(Win) is the bootstrapped probability of being ranked among the top 9 vote-getters.]
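The table’s uncertainty measures come from a bootstrap. The working paper details the exact procedure; the sketch below shows one standard version, resampling polling places with replacement. The array layout is our assumption, and it averages shares unweighted rather than weighting by turnout.

```python
import numpy as np

def bootstrap_prob_win(shares: np.ndarray, n_boot: int = 10_000,
                       seats: int = 9, seed: int = 0) -> np.ndarray:
    """Bootstrap each candidate's probability of finishing in the top `seats`.

    shares: (n_places, n_candidates) array of vote percentages by polling
    place. Resamples polling places with replacement, re-tallies citywide
    averages, and counts how often each candidate lands in the top `seats`.
    """
    rng = np.random.default_rng(seed)
    n_places, n_cands = shares.shape
    wins = np.zeros(n_cands)
    for _ in range(n_boot):
        resample = rng.integers(0, n_places, size=n_places)  # sample places
        totals = shares[resample].mean(axis=0)  # unweighted citywide average
        wins[np.argsort(totals)[-seats:]] += 1  # credit the top `seats`
    return wins / n_boot
```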
Those city-wide results hide interesting differences among types of neighborhoods. We have divided polling places into four groups using data from the American Community Survey: High Income (household median income > $50K), plus Moderate Income White, Moderate Income Black, and Moderate Income Hispanic, classified by the race/ethnicity with the highest representation.
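In code, the grouping rule looks something like the sketch below; the income threshold is as stated, while the exact ACS variables and tie-breaking are our assumptions.

```python
def neighborhood_group(median_income: float, race_shares: dict[str, float]) -> str:
    """Assign a polling place to one of the four neighborhood groups."""
    if median_income > 50_000:
        return "High Income"
    plurality = max(race_shares, key=race_shares.get)  # largest share
    return f"Moderate Income {plurality}"

neighborhood_group(38_000, {"White": 0.25, "Black": 0.60, "Hispanic": 0.10})
# -> "Moderate Income Black"
```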
The effect of volunteers was largest in Moderate Income White and Moderate Income Hispanic polling places, where it increased the gap between Recommended and Not Recommended candidates by 0.71 and 0.66 percentage points, respectively. In Moderate Income Black polling places, handing out literature still had a positive, statistically significant effect, increasing the gap by 0.39 percentage points. Only in High Income polling places did the volunteers have a statistically insignificant effect, 0.20 percentage points, largely because voters there were already voting for Recommended candidates, even in the control group.
We don’t know why predominantly Black polling places might have had smaller effects than White and Hispanic ones. Maybe people from other organizations were already handing out sample ballots in those neighborhoods, diluting the effect. Maybe voters there were less receptive to recommendations from Bar Association volunteers. Maybe volunteers there were less active, and less likely to engage voters walking through the door. We don’t know, though future research could vary volunteer training to isolate the cause, and to perform better in those divisions.
Could Strategically Placed Volunteers Swing an Election?
One way to assess the practical significance of this effect is to ask whether it is large enough to have changed real-world outcomes. Volunteers had an effect as high as 0.71 percentage points in Moderate Income White polling places, and the last Not Recommended candidate won the overall election by only 0.27 percentage points. There are only so many of those divisions, however, and soon you get to divisions with lower effects. Suppose the Philadelphia Bar had already known the above results and placed its volunteers in the optimal polling places. How many volunteers would it have taken to change the election outcomes?
We simulate this by sorting polling places by the treatment effect times the number of voters; the polling places with the highest values of this metric are the ones where placing a volunteer would most change the overall outcome. By this ranking, the optimal polling places are most often Moderate Income White, with some very-high-turnout Moderate Income Black neighborhoods mixed in. Hispanic polling places never had high enough turnout to swing an election (another problem for another project), and High Income polling places didn’t have a high enough treatment effect.
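The ranking step might look like the following sketch. The column names are hypothetical; the effects are the group-level estimates reported above, and the full simulation in the working paper then re-tallies the election with the treated places’ gaps shifted.

```python
import pandas as pd

# Group-level treatment effects reported above (percentage points).
EFFECTS = {
    "High Income": 0.20,
    "Moderate Income White": 0.71,
    "Moderate Income Black": 0.39,
    "Moderate Income Hispanic": 0.66,
}

def best_placements(places: pd.DataFrame, n_volunteers: int) -> pd.DataFrame:
    """Rank polling places by expected vote swing (effect x turnout).

    places: one row per polling place, with columns "group" (one of the
    four neighborhood types) and "turnout" (number of voters).
    """
    swing = places["group"].map(EFFECTS) * places["turnout"] / 100
    return places.loc[swing.nlargest(n_volunteers).index]
```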
The results are humbling. With volunteers for 41 polling places, the number the Bar Association had for this election, even optimally placed volunteers would have led to the exact same nine winners. Volunteers at 100 polling places would have led to the same winners, too. In fact, it would have taken enough volunteers for 532 polling places (69% of the city’s 773) to change any of the winners. With that many, we would have seen 2 Not Recommended judges elected, rather than 3.
[Figure: Simulated Results for Optimally Placed Volunteers. Grey = Recommended candidate.]
However, there is cause for hope. This was the Bar Association’s first polling place effort, and recruiting volunteers should be easier next time. With one attempt under its belt, the Association could refine volunteer training and try other formats of listings, leading to higher treatment effects and better results. And listings of recommendations don’t need to do all of the work alone: as part of a multi-pronged strategy, they could be enough to push Recommended candidates over the top.
Jonathan Tannen, Ph.D., was a Director at Econsult Solutions, Inc. (ESI). Jonathan’s dissertation research used GIS and large-scale computational techniques to develop a Bayesian method to measure the movement of neighborhood boundaries.