Election Accuracy

 

Measuring Election Accuracy

One of the most important criteria we use to evaluate voting methods is "Accuracy," but how do we determine if a voting method is accurate? Does it elect the candidates who should win? Are the winners representative?

There are many tools that researchers use to answer these questions, and, like all lovers of the scientific method, we advocate taking a close look from multiple perspectives.

 

Quick Glossary

Some voting methods have several different names used by different researchers. To help you interpret the various charts we'll reference, here are a few of those methods, starting with the name we'll use, followed by some of their alternatives.

  • Choose One Voting: Plurality, First Past The Post (FPTP)
  • Score Voting: Range Voting (sometimes this term is used to refer to Dr. Warren Smith’s specific Score Voting proposal)
  • Ranked Choice Voting (RCV): Instant Runoff Voting (IRV), Single-winner Hare, Ware’s method

 

Evaluating Specific Election Outcomes

For evaluating the accuracy of individual elections when voter preferences are known, one approach is to find the "Condorcet Winner": the candidate, if one exists, who would beat all others in head-to-head races. For voting methods with a more expressive ballot, looking for the highest scoring or highest rated candidate is another common-sense approach. Preference order and level of support can be thought of as measuring the quantity of voters and the quality of their support, respectively. The best results are found when both are considered in tandem. In cases where both agree, the election result was certainly correct. When they disagree, or when there are ties, the question of who should have won may become a philosophical debate.

An ideal candidate may not exist, but if they do, it is generally accepted that they would be the candidate who is closest to the ideological center of the electorate, or the candidate who would make as many voters as possible as satisfied as possible with the election outcome.


Statistical Modeling

For comparing the accuracy of various voting methods across multiple elections, most voting scientists turn to statistical modeling of simulated elections. "Bayesian Regret" (1) is one such model, and the Ka-Ping Yee Diagrams (2, 3), which are explained in this video from Equal Vote founder Mark Frohnmayer (4), are another. One of the most sophisticated and realistic is "Voter Satisfaction Efficiency" (5) from Dr. Jameson Quinn, which has since been refined even further (6). Quinn was completing his PhD in Statistics at Harvard and was Vice Chair at the Center for Election Science when this study came out in 2017; he joined the Equal Vote Coalition Board of Directors in 2020. Modeling from John Huang (7, 8) and others (9, 10, 11, 12) represents a newer addition to the body of evidence showing that STAR Voting does particularly well in larger fields of candidates. It's interesting to note that though the specific numbers may differ slightly in some cases, all of this data is in general agreement on the relative conclusions when comparing voting methods. Collectively, this body of simulations represents an invaluable complement to the data we can collect from real world elections.

 

Real World Elections

Elected officials and reform advocates who are considering adopting a new voting method may prefer to wait until the method has been "used in the real world" some number of times. This is helpful for looking at considerations beyond "accuracy," such as implementation logistics, voter education campaigns, and more, but reformers may find themselves in an impossible dilemma if they are waiting for real world elections to prove that a given method "works" by this definition.

Real world, empirical election data is one source of information, but unfortunately, it has its limitations.

"In discussions comparing election methods, people often argue for one method or another by presenting examples of cases where a particular method fails or behaves strangely. There are five commonly cited criteria (called universality, non-imposition, non-dictatorship, monotonicity, and independence of irrelevant alternatives) for "reasonable behaviour" of an election method. But it has been mathematically proven that no [ranked] single-winner election method can meet all five of these criteria (13), so one can always invent situations where a particular method violates one of these criteria. Thus, presenting individual cases of strange behaviour proves little." - Ka-Ping Yee

The fact is that no voting method is perfect 100% of the time, and any method will yield the correct result in elections that are not competitive. For this reason, a single election, or even hundreds or thousands of elections, may not constitute a statistically meaningful sample. In much of the world, elections are two-party dominated, and many elections don't have multiple competitive candidates or parties. Every voting method will elect the majority preferred winner in a two-candidate race, so election data comparing voting methods without a robust sample may give a false sense of security.

Further complicating the issue, the Choose One ballot used in most public elections doesn't give us much data to go on. These ballots are not expressive enough to collect the voters' full opinions, and we have no way to definitively determine if the votes cast were honest or dishonest. We also have no way of knowing if factors like vote splitting and the Spoiler Effect distorted the election outcome. For determining if a Choose One election picked the right winner, polling data needs to be considered as well.

 

Ground Zero - Notorious Failed Elections

For assessing voting methods with less expressive ballots, pre-voting-day polling and exit polls can be a valuable addition to election results and ballot data. Ratings are often used in this kind of polling because they capture the nuance that the ballots themselves cannot. Despite these less expressive ballots, we can draw some firm conclusions from the data, polling, and other observations and trends.

For example, failed elections due to vote splitting and the Spoiler Effect can be glaringly obvious. The 2000 presidential election with George W. Bush (Republican) vs Al Gore (Democrat) and Ralph Nader (Green Party) is a classic example, even if we ignore the electoral college. In that election, a majority of voters were from the left end of the political spectrum. Based on polling, we can safely conclude that many Green Party voters would have preferred Gore over Bush, and that with a more expressive voting method, Gore would have won. In 1992, the same scenario played out in political reverse: Republican George H. W. Bush was likely the candidate preferred overall, but he lost the election to Bill Clinton after voters on the right were split between Bush and Ross Perot.

Among voting scientists, there is full consensus that Choose One Voting is wildly inaccurate with more than two candidates in the race. 

“The fact is that [Choose One Voting], the voting method we use in most of the English-speaking world, is absolutely horrible, and there is reason to believe that reforming it would substantially (though not of course completely) alleviate much political dysfunction and suffering.” - Jameson Quinn in “A Voting Theory Primer for Rationalists” (14)

Real world data is more insightful when we look at election results from voting methods that do use expressive ballots. For example, Ranked Choice Voting (15) uses an expressive ballot, but it tallies the votes in a series of elimination rounds, each of which ignores most of the ballot data. When we go back and look at the full ballot data, sometimes we find Ranked Choice Voting (RCV) elections, like the 2022 Alaska Special General election (16), where the candidate who won wasn't actually preferred by the voters according to the ballots cast.

 

Condorcet Winners and Ranked Voting

The Condorcet Winner (17) is the candidate who was preferred over all others head-to-head, and a ranked ballot or any other ballot that shows preference order is all that is needed to find the Condorcet Winner if one exists. Thus, for ranked ballot elections, the Condorcet Winner is the best way to evaluate election results.

Unfortunately, Condorcet has its limitations as well:

  • First, there isn't always a single winner who was preferred over all others. Sometimes preferences are cyclical, like a rock-paper-scissors three-way tie (A>B, B>C, C>A; see the sketch below).
  • Second, Condorcet only looks at preference order, not level of support, so there are cases where the Condorcet winner wasn't actually the candidate with the most support. A candidate who is your second choice may be almost as good as your favorite, or they could be almost as bad as your last choice. Ranked ballots don't have the resolution to allow voters to make those distinctions.
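To make the mechanics concrete, here is a minimal sketch in Python of finding a Condorcet Winner from ranked ballots (the ballots shown are hypothetical, and complete rankings are assumed). It tallies every head-to-head matchup and returns None when, as in the rock-paper-scissors cycle above, no candidate beats all others:

from itertools import combinations

def condorcet_winner(ballots, candidates):
    """Return the Condorcet Winner, or None if no candidate beats all others.
    `ballots` is a list of (ranking, count) pairs, where each ranking lists
    every candidate from most to least preferred (complete rankings assumed)."""
    wins = {c: 0 for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(count for ranking, count in ballots
                       if ranking.index(a) < ranking.index(b))
        b_over_a = sum(count for ranking, count in ballots
                       if ranking.index(b) < ranking.index(a))
        if a_over_b > b_over_a:
            wins[a] += 1
        elif b_over_a > a_over_b:
            wins[b] += 1
    for c in candidates:
        if wins[c] == len(candidates) - 1:  # beat every other candidate
            return c
    return None  # a cycle or a pairwise tie: no Condorcet Winner exists

# A rock-paper-scissors cycle (A beats B, B beats C, C beats A): no winner.
print(condorcet_winner([(["A", "B", "C"], 4),
                        (["B", "C", "A"], 3),
                        (["C", "A", "B"], 3)],
                       ["A", "B", "C"]))  # -> None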

Advocates of RCV often argue this point to defend the results of the 2009 Burlington, VT IRV election, which failed to elect the candidate who was preferred over all others. But to make that argument convincingly, we would need to know more than just voters' preference orders: we would need to know how much each voter liked each candidate.

In the Burlington mayoral race, there were three viable candidates: a Democrat, a Republican, and a Progressive, and all three had significant support. The Democrat was preferred over all the others (the Condorcet Winner) but came in third place after voters' first choice votes were counted in the first round. The Progressive won. This result was widely regarded as a failed election, and the Ranked Choice system was repealed shortly thereafter.

Especially problematic was the fact that voters had been told several things that the election proved false in stark terms:

  1. Voters were told that if their first choice was eliminated, their next choice would be counted. In Burlington, the Republican voters' second choice wasn't counted because that candidate had already been eliminated.
  2. Voters were told it was safe to vote their conscience. In reality, these voters should have strategically ranked their second choice first, knowing that their first choice wasn't going to win. Voting lesser evil would have gotten them a better outcome.
  3. Voters were told RCV would elect the majority preferred winner. The Democrat, who was eliminated first, was preferred by a larger majority than the Progressive, who won.

Did RCV elect the wrong winner? Many Republican voters ranked the Democrat as their second choice, showing that they preferred the Democrat to the Progressive candidate. If those voters truly would have been significantly more satisfied with the Democrat, then the Condorcet winner should have won. On the other hand, if Republicans would have been almost equally dissatisfied with either the Democrat or the Progressive, then the Progressive was probably the candidate with the most support after all.

The point is that in these kinds of close three-way ties, it's critical to have expressive ballot data in order to determine if the candidate who won had the most support or not. An expressive ballot shows level of support, preference order, and allows voters to express "equal preference" if desired. In Burlington, the ballots clearly showed that the Democrat was preferred over all others. The Democrat was the Condorcet Winner and so, according to the ballots cast, he clearly deserved to win.

It's important to note that when this issue arises, it doesn't always favor Progressives or third parties. RCV could just as easily elect a Republican where a Democrat was preferred, or any other candidate, but when Ranked Choice Voting fails, it tends to elect the most polarizing candidate from one of the two largest factions. Even in situations where the Condorcet Winner didn't deserve to win, these kinds of outcomes cast doubt on the legitimacy of the winner, as well as the method itself.

To learn more about Burlington, read more from Equal Vote here (18), from The Center for Election Science here (19), and from the Center for Range Voting here (20).

Was this election a fluke, or did it represent a serious flaw in the system?

 

Statistical Analysis of Election Accuracy

In order to answer that question, electoral scientists, mathematicians, and political scientists turned to statistical analysis of simulated elections. Beginning with the work of Weber in 1978 and Merrill in 1984, the quest to answer this question launched a new era in the science of comparing voting methods.

Merrill, Samuel (1984). "A Comparison of Efficiency of Multicandidate Electoral Systems" (21). American Journal of Political Science. Note that Ranked Choice Voting is labeled as Hare.

One such paper "Frequency of monotonicity failure under Instant Runoff Voting: Estimates based on a spatial model of elections" (22) begins by stating:

"It has long been recognized that [Ranked Choice Voting (RCV)] suffers from a defect known as nonmonotonicity, wherein increasing support for a candidate among a subset of voters may adversely affect that candidate’s election outcome. The expected frequency of this type of behavior, however, remains an open and important question, and limited access to detailed election data makes it difficult to resolve empirically. In this paper, we develop a spatial model of voting behavior to approach the question theoretically. We conclude that monotonicity failures in three-candidate [RCV] elections may be much more prevalent than widely presumed (results suggest a lower bound estimate of 15% for competitive elections)."

This study, from Dr. Joseph T. Ornstein of the University of Michigan and Dr. Robert Z. Norman of Dartmouth College, came out in 2013. The results were seen by many as a red flag, but for the researchers who had been pioneering the work of bringing advances in computer simulations and statistics into the field, these findings only confirmed warnings they had issued long before.
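To see what a monotonicity failure actually looks like, the sketch below implements a bare-bones IRV tally in Python and runs it on hypothetical ballot counts (not drawn from any real election). A wins the first election; then two voters raise A from last place to first, changing nothing else on their ballots, and A loses:

from collections import Counter

def irv_winner(ballots):
    """Bare-bones single-winner IRV/RCV tally. `ballots` maps a ranking
    (a tuple of candidates, best first) to the number of voters who cast it.
    Each round counts every ballot for its highest-ranked remaining
    candidate, then eliminates the candidate with the fewest votes."""
    remaining = {c for ranking in ballots for c in ranking}
    while True:
        tally = Counter()
        for ranking, count in ballots.items():
            top = next(c for c in ranking if c in remaining)
            tally[top] += count
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > sum(tally.values()):  # a majority of remaining votes
            return leader
        remaining.discard(min(tally, key=tally.get))

# Hypothetical election: A wins after C is eliminated (A beats B, 17 to 8).
before = {("A", "B", "C"): 10, ("B", "C", "A"): 8, ("C", "A", "B"): 7}
print(irv_winner(before))  # -> A

# Two B>C>A voters raise A from last to first; their B>C order is unchanged.
after = {("A", "B", "C"): 12, ("B", "C", "A"): 6, ("C", "A", "B"): 7}
print(irv_winner(after))   # -> C  (B is eliminated instead, C beats A 13-12)

Raising A knocks B out of the first round instead of C, and C then beats A in the final round: support gained, election lost.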

 

Bayesian Regret

In 2000, Dr. Warren Smith of the Center for Range Voting built on the work of Merrill (21) and Bordley (23) and applied the game theory concept of Bayesian Regret (24) to voting theory, breaking new ground by systematically varying election methods, voter utility models, and strategy models. The chart below, which predates the invention of STAR Voting by over a decade, showed that Score Voting combined with a Top-Two general election topped the charts, even when some voters were strategic. STAR Voting (Score Then Automatic Runoff) is essentially this method, but with a single election rather than a separate primary and general.

Note that Ranked Choice (Instant Runoff) Voting, the method used in Australia and in some parts of the United States, came in 42nd place, and the ubiquitous Choose One (Plurality) Voting method came in dead last at 50th.

Simulations from the Center for Range Voting assessing frequency of Bayesian Regret (lower is better) and frequency of Condorcet winners (higher is better)
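The procedure behind these numbers is conceptually simple: simulate many electorates, and measure how much total voter utility is lost, on average, by electing each method's winner instead of the utility-maximizing candidate. The Python sketch below illustrates the idea under the simplest possible assumptions (independent random utilities and honest voters); Smith's actual simulations systematically varied the utility and strategy models:

import random

def bayesian_regret(method, n_voters=99, n_candidates=5, trials=1000):
    """Estimate Bayesian Regret: the average utility lost, across many
    simulated electorates, by electing `method`'s winner instead of the
    candidate with the highest total voter utility. Lower is better."""
    total_regret = 0.0
    for _ in range(trials):
        # utilities[v][c] = how much voter v likes candidate c
        # (independent uniform draws: the simplest possible utility model).
        utilities = [[random.random() for _ in range(n_candidates)]
                     for _ in range(n_voters)]
        totals = [sum(u[c] for u in utilities) for c in range(n_candidates)]
        total_regret += max(totals) - totals[method(utilities)]
    return total_regret / trials

def honest_choose_one(utilities):
    """Choose One voting with honest voters: everyone picks their favorite."""
    votes = [0] * len(utilities[0])
    for u in utilities:
        votes[u.index(max(u))] += 1
    return votes.index(max(votes))

print(bayesian_regret(honest_choose_one))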


These statistics foreshadowed the next revelation in voting theory, and in 2014, STAR Voting was invented (25) at the Equal Vote Conference at the University of Oregon.

The Equal Vote Conference, like most events on voting reform, featured presentations from advocates of both the Ranked Choice Voting and the Score/Approval voting camps. The two camps (one favoring ordinal, or ranked, methods and the other favoring cardinal, or scored, methods) have long been at odds, with both sides citing details of the other's proposals as deal breakers.

STAR Voting combines the two approaches. The realization was that a scoring ballot includes both level of support and preference order, which means that it can be counted both ways — with a scoring round and then a pairwise comparison (or "automatic runoff"). This hybrid approach unlocks the simplicity and benefits of tabulation by addition, while also achieving the honest voting incentives gained from a preference ballot and top-two runoff.
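Counting a scoring ballot both ways is straightforward. Here is a minimal sketch of STAR tabulation in Python, using made-up ballots; the official rules also specify tie-breaking procedures, which are omitted here:

def star_winner(ballots, candidates):
    """Minimal STAR tally: a scoring round finds the two highest-scoring
    candidates, then an automatic runoff elects whichever finalist is
    scored higher on more ballots. `ballots` is a list of dicts mapping
    each candidate to a 0-5 score. (Official rules also define tie-breakers.)"""
    totals = {c: sum(b[c] for b in ballots) for c in candidates}
    finalist1, finalist2 = sorted(candidates, key=totals.get, reverse=True)[:2]
    prefer1 = sum(1 for b in ballots if b[finalist1] > b[finalist2])
    prefer2 = sum(1 for b in ballots if b[finalist2] > b[finalist1])
    return finalist1 if prefer1 >= prefer2 else finalist2

# Three made-up ballots: B and C reach the runoff, and B wins it 2 to 1.
ballots = [{"A": 5, "B": 4, "C": 0},
           {"A": 0, "B": 3, "C": 5},
           {"A": 1, "B": 5, "C": 2}]
print(star_winner(ballots, ["A", "B", "C"]))  # -> B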

The theory was that STAR Voting may offer a compromise that could outperform both Approval and RCV, addressing major criticisms of both, and if so, that the new method may have the power to unite the fractured reform movement.

 

The Ka-Ping Yee Simulations

In 2005, a young researcher named Ka-Ping Yee, who was completing his PhD in Computer Science at UC Berkeley, introduced a novel way to examine the behavior of single-winner election methods. Yee Diagrams (2), as they're now widely known, show candidates and voter blocks in a 2-dimensional political space. Yee open-sourced all his code, and many other researchers have since been able to build on his work (26).

This kind of visualization is useful because you can see exactly how ideologically close or far each voter is from each candidate. The color of the background represents which candidate would win under each method if a randomized electorate in a normal distribution, centered at that point, were to vote. For example, in the Choose One (Plurality) diagram, even if the electorate were centered right next to the green candidate, they would lose. In Approval, the results look fair, and in the RCV (Hare) chart, you can see extreme distortions: if the center of public opinion is close to green, the winner looks almost random. Yee Diagrams are a simplification of our complex political spectrum, but they do a good job of illustrating common phenomena that can affect election outcomes.
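The procedure behind each pixel is worth spelling out. The sketch below is not Yee's actual code, and the grid size, electorate size, and spread are illustrative, but it shows the core loop: for every point in the political space, sample a normally distributed electorate centered there, have each voter judge candidates by ideological distance, and record the winner as that pixel's color:

import random

def yee_diagram(method, candidates, grid=50, voters=200, spread=0.35):
    """For every point on a grid covering the 2-D political space, sample
    a normally distributed electorate centered there and record which
    candidate wins. Rendering each winner as a color yields a Yee diagram.
    `candidates` is a list of (x, y) positions in the unit square."""
    image = []
    for row in range(grid):
        image_row = []
        for col in range(grid):
            cx, cy = col / (grid - 1), row / (grid - 1)
            electorate = [(random.gauss(cx, spread), random.gauss(cy, spread))
                          for _ in range(voters)]
            image_row.append(method(electorate, candidates))
        image.append(image_row)
    return image

def choose_one(electorate, candidates):
    """Honest Choose One voting: each voter picks the nearest candidate."""
    votes = [0] * len(candidates)
    for vx, vy in electorate:
        nearest = min(range(len(candidates)),
                      key=lambda i: (vx - candidates[i][0]) ** 2
                                    + (vy - candidates[i][1]) ** 2)
        votes[nearest] += 1
    return votes.index(max(votes))

picture = yee_diagram(choose_one, [(0.3, 0.5), (0.5, 0.5), (0.7, 0.5)])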

Yee's diagrams illustrated some serious pathologies in the Choose One and Ranked Choice methods, but didn't include STAR Voting, which hadn't yet been invented at the time. In 2017, Mark Frohnmayer, using Yee's code, created a video called "Animated Voting Methods" (27), which adds Score Voting and STAR Voting, as well as a one-voter "ideal winner" model for comparison. These models showed that where Choose One and Ranked Choice tend to squeeze out candidates in the center (i.e. favoring more polarizing candidates), Score and Approval may give an advantage to candidates positioned in between others, though to a lesser extent. Of the systems visualized, STAR Voting consistently performed closest to the ideal model.

These findings corroborated those from Warren Smith in his paper "Pro-Extremist versus Pro-Centrist bias in Voting Methods" (28). This point often sparks a philosophical debate, with many advocates for RCV, especially those from the fringes of the political spectrum, arguing that this is a feature, not a bug. On the other side, many advocates of Approval Voting consider a slight centrist bias to be an advantage that could translate to more homogenized legislatures, which may be less mired in infighting and thus more effective and efficient.

These voting reform advocates fall into the common trap of preferring the system they believe will give them an advantage, but they miss a key point: these biases depend on the positions of the candidates relative to each other, not to the voters, and may not correlate at all with the right-left political spectrum. A bias that favors the far left is just as likely to favor the far right in a red district, or even a centrist in a deep blue district. Furthermore, these flaws can be exploited by strategically nominating candidates, much as spoiler candidates are intentionally run today to change election outcomes.

Of course, at the Equal Vote Coalition, we prefer unbiased, accurate, and representative elections. Voting methods in this category according to the analysis we've seen include STAR Voting, the Condorcet methods (like Ranked Robin), and Score Voting or Approval Voting if combined with a Top-Two runoff election.

 

Voter Satisfaction Efficiency

One of the most cutting edge tools for measuring election accuracy is Voter Satisfaction Efficiency (VSE), which came out in 2017 (5) and was further refined through peer review in 2023 (6). Developed by Dr. Jameson Quinn, who in 2017 was completing his PhD in Statistics at Harvard and was Vice Chair at the Center for Election Science, VSE analyzes voting methods using thousands of simulated elections across a wide variety of scenarios. Factors like strategic voters, voter blocks that cluster on issues, the number of candidates, polarization in the electorate, and more are considered to help determine when and how often an election system elects the best candidate. In VSE, the candidate who should win is defined as the "candidate who would make as many voters as possible as satisfied as possible with the election outcome."
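The VSE score itself is a simple normalization; the hard work is in the simulations that produce its inputs. As we understand Quinn's definition, a method's VSE is the share of the possible satisfaction gain over a random winner that the method actually captures (the numbers in the example below are illustrative, not from the study):

def vse(avg_winner_util, avg_best_util, avg_random_util):
    """Voter Satisfaction Efficiency: the share of the possible satisfaction
    gain over a random winner that a voting method actually captures. Inputs
    are average total voter utilities across many simulated elections for the
    method's winners, the utility-maximizing candidates, and randomly drawn
    candidates. 100% means the ideal winner is always elected; 0% means the
    method does no better than picking a candidate at random."""
    gain = avg_winner_util - avg_random_util
    possible_gain = avg_best_util - avg_random_util
    return 100 * gain / possible_gain

# Illustrative inputs only, not figures from the study:
print(f"{vse(0.96, 1.00, 0.50):.0f}%")  # -> 92%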

Voter Satisfaction Efficiency makes a strong case for STAR Voting. In Quinn's 2017 VSE analysis, STAR topped the charts, coming in as more accurate than all other voting systems being seriously advocated for, many of them by large margins. The only voting method close to on par was a Condorcet method called Ranked Pairs (29), which had previously set the bar for accuracy but is considered too complex for public elections.

Here are some of the findings we can draw from the VSE graphs:

  • STAR is among the very best of the best. When voters are honest, STAR delivers its best results with a VSE of over 98%. Under less than ideal circumstances, such as elections where a large portion of voters are strategic, STAR was still highly accurate with a VSE of over 91%.

  • STAR Voting at worst was basically just as accurate as the best-case-scenario for IRV (commonly referred to as Ranked Choice Voting) and was much better than Plurality Voting (our current system) in every scenario. For comparison, IRV elects the correct winner 80-91% of the time, and Plurality voting only delivers correct outcomes 71-86% of the time.

  • STAR showed high resiliency to strategic voting, with results closely clustered regardless of voter strategy. This means that tactical voting has a much smaller impact on overall election accuracy compared to other systems. Even if many people try to "game the system," the election will still come out in relatively good shape. In this category, 3-2-1 Voting (30) is another method that did particularly well.

  • Though STAR does well even if voters are strategic, VSE strategy simulations showed that STAR doesn't incentivize strategic voting. While no voting method can eliminate all opportunities for strategic voting (31), in STAR, strategic and dishonest voting is just as likely to backfire as it is to help the individual voter. In contrast, strategic voting under Instant Runoff Voting was found to be incentivized almost three times as often as it backfired.

The VSE chart below shows how often strategic voting works compared to how often it backfires. Better results are higher and further to the left. This chart shows how extremely dependent on strategic voting Plurality voting is, with a ratio of 17:1, by far the worst out of all voting methods studied.

Of the methods which are the subject of active campaigns in the U.S., STAR Voting boasts a 1:1 ratio, indicating that strategic voting will not give voters an edge. Ideal Approval comes next with 1:2.6, then comes IRV (Ranked Choice) with 1:2.7, and then Score Voting with a ratio of 1:3. Note that Score voting (which gets considerable criticism for being "gameable") is only slightly worse than IRV, and still over 5 times better than the current system on this metric.

You can learn more about Voter Satisfaction Efficiency here.

 

Accuracy and Other Key Considerations

Advances in voting theory have given the modern era of voting reform a huge advantage compared to the reformers of yesteryear. While a few of the older methods stand up to the test of time (including Condorcet and Score Voting) and do deliver outcomes significantly better than the ubiquitous Choose One Voting, modern simulations have revealed serious flaws, such as unrepresentative outcomes, in methods like Ranked Choice Voting, which is the subject of reform efforts around the world. Simulations have allowed for high level analysis of strategic incentives in ways that were not previously possible.

While "Accuracy" is of course one of the most important considerations in the quest to find the best voting method, and it makes a strong case for STAR Voting, there are other factors that absolutely warrant consideration as well. At the Equal Vote Coalition (32), a nonpartisan nonprofit focused on voting reform, we've identified five overarching pillars that need to be maximized for better voting, healthy political discourse, and fair representation: Honesty, Equality, Accuracy, Expressiveness, and Simplicity (33).

 

Honest: Safe to vote your conscience; strategic voting is not incentivized (34).

Equal: Ensures an equally-weighted vote (35) for all. Eliminates vote-splitting. Doesn't give anyone an unfair advantage. See Center Squeeze (36), Center Expansion (37), and Electability Biases (38).

Accurate: Winners are representative and accurately reflect the will of the people. Election accuracy is assessed using a variety of metrics including Voter Satisfaction Efficiency.

Expressive: Voters are able to express their full nuanced opinion.

Simple: Easy to understand, easy to tabulate, easy to implement, easy to audit.

 


Sources:

1.) Smith, Warren. "Range voting with mixtures of honest and strategic voters." Center for Range Voting, 2003. An analysis of the accuracy of over 50 different voting methods using a mix of honest and strategic voters in simulations.

2.) Yee, Ka-Ping. "Voting Simulation Visualizations." 2005. Novel visualizations of the effects different voting methods have on outcomes based on candidate distribution in a 2-dimensional space. Particularly useful for demonstrating center-squeeze and center-expansion.

3.) "Yee diagram." electowiki.

4.) Frohnmayer, Mark. "Animated Voting Methods." 16 June 2017. Explanation of animations of Yee Diagrams, updated to include STAR Voting.

5.) Quinn, Jameson. "Voter Satisfaction Efficiency Simulator." Center for Election Science, 2017. The first publication of VSE, an upgrade to Smith's Bayesian Regret models and the first simulations to include STAR Voting.

6.) Wolk, Sara, et al. "STAR Voting, equality of voice, and voter satisfaction: considerations for voting method reform." Constitutional Political Economy, vol. 34, issue 3, 20 March 2023, pp. 310-334. Quinn upgrades his VSE models with Marcus Ogren's strategy models, and Ogren introduces Pivotal Voter Strategic Incentive to measure the strength of incentives for different voter strategies under different methods.

7.) Huang, John. "Multi-dimensional Spatial Voting Simulations." 25 May 2020. Analysis showing that the relative conclusions of 2-dimensional simulations hold as the number of dimensions increases.

8.) Huang, John. "Strategic Voter Simulations." 2 February 2021. Simulations demonstrating that voting methods using pairwise comparisons are robust against voter strategy.

9.) Gallets, David. "STAR vs IRV vs FPTP Simulations." Simulations demonstrating how often different methods fail to elect the candidate closest to the center of public opinion as the number of candidates increases.

10.) Psephomancy. "STAR voting vs other systems on a 2D political compass." 2 November 2018. Visually shows how different voting methods behave under different clusterings of voters.

11.) Essenzia. "Yee Diagram - Strong monotonicity failure resistance." 7 September 2020. Yee diagrams demonstrating nonmonotonicity in various voting methods.

12.) Darlington, Richard B. "Are Condorcet and Minimax Voting Systems the Best?" ArXiv, 2021. Results of Darlington's simulations of various voting methods in comparison to Minimax.

13.) "Arrow's impossibility theorem." Wikipedia article describing the theorem that states that no ordinal method can pass Independence of Irrelevant Alternatives.

14.) Quinn, Jameson. "A voting theory primer for rationalists." Less Wrong, 12 April 2018. Quinn draws on his decades of experience to present a starting guide for voting enthusiasts.

15.) "Instant-runoff voting." Wikipedia article on Ranked Choice Voting.

16.) Peter, Arend. "RCV Changed Alaska." Equal Vote Coalition, 14 April 2024. Visual tool demonstrating multiple failures in Alaska's 2022 Special General election and others across the US.

17.) "Condorcet winner criterion." Wikipedia article explaining beats-all winners.

18.) Frohnmayer, Mark. "What the heck happened in Burlington?" Equal Vote Coalition, 12 March 2017. Frohnmayer breaks down the numbers in the 2009 Burlington, VT mayoral RCV election and speculates how the outcome may have differed if they used STAR Voting.

19.) Hamlin, Aaron. "The Spoiler Effect." Center for Election Science. Hamlin explains the Spoiler Effect and why RCV doesn't solve it. (Note: After Hamlin left the Center for Election Science, the organization chose to remove this page from their site.)

20.) Smith, Warren. "Burlington Vermont 2009 IRV mayor election: Thwarted-majority, non-monotonicity & other failures (oops)." Center for Range Voting, March 2009. Smith breaks down the Burlington election in detail.

21.) Merrill, Samuel. "A Comparison of Efficiency of Multicandidate Electoral Systems." American Journal of Political Science, vol. 28, no. 1, pp. 23-48, February 1984. Merrill improves on Bordley's (1983) work and runs computer simulations of more voting methods to quantify their accuracy.

22.) Ornstein, Joseph T. and Robert Z. Norman. "Frequency of monotonicity failure under Instant Runoff Voting: Estimates based on a spatial model of elections." Public Choice, vol. 161, 1 October 2013. Ornstein and Norman use spatial models to estimate the frequency of monotonicity failures in competitive elections.

23.) Bordley, Robert F. "A Pragmatic Method for Evaluating Election Schemes through Simulation." American Political Science Review, vol. 77, iss. 1, March 1983, pp. 123-141. Bordley runs the first-ever computer simulations to quantify the accuracy of different voting methods.

24.) Smith, Warren. "Bayesian Regret for dummies." Center for Range Voting. Smith explains what Bayesian Regret is and why it's useful for measuring the accuracy of voting methods.

25.) Frohnmayer, Mark. "Demystifying STAR Voting." Medium, 17 March 2021. Frohnmayer, the inventor of STAR Voting, reviews the invention of the method, including its necessity for correcting problems inherent in other methods like RCV.

26.) Case, Nicky. "To Build a Better Ballot: an interactive guide to alternative voting systems." December 2016. Case gives you control over interactive, simplified Yee Diagrams to experience how different voting methods affect outcomes.

27.) See citation (4).

28.) Smith, Warren. "Pro-Extremist versus Pro-Centrist bias in Voting Methods." Center for Range Voting. A basic set of Yee Diagrams demonstrating center-squeeze and center-expansion in different voting methods.

29.) "Ranked pairs." Wikipedia article explaining Ranked Pairs.

30.) "3-2-1 voting." electowiki article explaining 3-2-1 Voting.

31.) "Gibbard's theorem." Wikipedia article explaining the theorem that all voting methods incentivize dishonest strategic voting for some voters in certain situations.

32.) "Equal Vote Coalition." Equal Vote Coalition. Home page.

33.) Wolk, Sara. "Criteria: Evaluating voting systems and the criteria we judge them by." STAR Voting Action. Wolk explains why voting methods should be judged based on criteria that are not pass/fail.

34.) Frohnmayer, Mark. "Strategic Voting with STAR?" The Equal Vote Coalition. Frohnmayer reviews the claim that STAR Voting incentivizes strategic voting.

35.) Wolk, Sara. "What is an Equal Vote?" The Equal Vote Coalition. Wolk goes through the definition and implications of One Person, One Vote.

35.) "Center squeeze." electowiki article explaining the center-squeeze effect.

36.) "Yee diagram." electowiki article explaining Yee Diagrams.

38.) Wolk, Sara. "Could STAR Voting slay the 'electability' dragon?" Medium, 30 March 2020.