Complete Document - Parties

Written by Bret Barrowman, Senior Specialist for Research and Evaluation, Evidence and Learning Practice at the International Republican Institute, and Amy Studdart, Senior Advisor for Digital Democracy at the International Republican Institute

Conceptual Framework

Even in relatively democratic, competitive political party environments, two related dilemmas make countering disinformation difficult. First, competitive parties face a “tragedy of the commons” with respect to disinformation, in which a healthy information environment leads to the best social outcomes, but also incentivizes individual actors to gain a marginal electoral advantage by muddying the waters. Second, parties are not unitary, but are collections of distinct candidates, members, supporters, or associated interest groups, each with its own interests or incentives. In this case, even when party organizations are committed to information integrity, they face a “principal-agent” dilemma in monitoring and sanctioning co-partisans. These related dilemmas create an incentive for political parties and candidates to avoid engaging in or implementing programmatic responses. Democracy, human rights, and governance (DRG) funders and implementing partners can mitigate these dilemmas by using networking and convening power to help parties maintain commitments to information integrity, within and between parties.

Programmatic Responses

DRG practitioners have implemented a wide range of programmatic approaches to reduce both the impact and use of disinformation and related tactics by political parties during elections. These approaches are summarized in the table below, according to the “core party function(s)” – the functions that parties perform in an ideal-type democratic party system – upon which the program approach might be expected to operate. This typology is intended to provide DRG practitioners with a tool through which to analyze party systems and programmatic approaches, with the goal of designing programs that are tailored to the challenges of political party partners.

Program Approaches and Core Party Functions

Each program approach is mapped below to the core party function(s) on which it might be expected to operate:

  • Interest Articulation: expressing citizen interests through electoral campaigns or the implementation of policy.
  • Interest Aggregation: bundling many disparate, and occasionally conflicting, citizen interests into a single branded policy package or platform.
  • Mobilization: activating citizens, usually party supporters, for political engagement, including attending rallies or events, taking discrete actions like signing petitions or contacting representatives, and especially voting.
  • Persuasion: parties’ or candidates’ attempts to change the opinions of voters – particularly undecided voters or opposition supporters – on candidates or policy issues.

  • Programs on Digital Media Literacy: Interest Articulation, Interest Aggregation, Mobilization
  • Programs on AI and Disinformation: Mobilization, Persuasion
  • Programs for Closed Online Spaces and Messaging Apps: Interest Aggregation, Mobilization
  • Programs on Data Harvesting, Ad Tech & Microtargeting: Interest Aggregation, Mobilization, Persuasion
  • Programs on Disinformation Content and Tactics: Interest Articulation, Interest Aggregation
  • Research Programs on Disinformation Vulnerability and Resilience: Interest Articulation, Interest Aggregation
  • Programs for Understanding the Spread of Disinformation Online: Mobilization, Persuasion
  • Programs Combating Hate Speech, Incitement, and Polarization: Interest Aggregation, Mobilization
  • Policy Recommendations and Reform / Sharing and Scaling Good Practice in Programmatic Responses: Interest Articulation, Interest Aggregation, Mobilization, Persuasion

 

Recommendations

  • When implementing these programmatic approaches, consider political incentives in addition to technical solutions.
  • Programmatic interventions should account for diverging interests within parties – parties are composed of functionaries, elected officials, interest groups, formal members, supporters, and voters – each of which may have unique incentives to propagate or take advantage of disinformation. 
  • The collective action problem of disinformation makes one-off interactions with single partners difficult – consider implementing technical programs with regular, ongoing interaction between all relevant parties to increase confidence that competitors are not “cheating.”
  • Relatedly, use the convening power of donors or implementing organizations to bring relevant actors to the table. 
  • Consider pacts or pledges, especially in pre-election periods, in which all major parties commit to mitigating disinformation. Importantly, the agreement itself is cheap talk, so pay careful attention to the design of institutions, both within the pact and external to it, for monitoring compliance.
  • There is limited evidence for the effectiveness of common counter-disinformation program approaches focused on political parties and political competition, including media literacy, fact-checking, and content labeling. That there is limited evidence does not necessarily imply these programs do not work, only that DRG funders and implementing partners should invest in rigorous evaluation of these programs to determine their impact on key outcomes like political knowledge, attitudes and beliefs, polarization, propensity to engage in hate speech or harassment, and political behavior like voting, and to identify what design elements distinguish effective programs from ineffective ones. 
  • DRG program responses have tended to lag political parties’ use of sophisticated technologies like data harvesting, microtargeting, deep fakes and AI generated content. Funders and implementing partners should consider the use of innovation funds to generate concepts for responses to mitigate the potentially harmful effects of these tools, and to rigorously evaluate impact. 

Definition of Political Parties

Political parties are organized groups of individuals with similar political ideas or interests who try to make policy by getting candidates elected to office.1 This electoral function – advancing candidates for office and securing votes for those candidates – distinguishes political parties from other organizations, including civil society organizations (CSOs) or interest groups. This electoral role creates unique incentives for political party actors with respect to disinformation and programmatic responses. 

Political Parties, Information, and Democracy: An Overview for Developing Context Analysis, Problem Statements, and Theories of Change

 

How Parties Connect Citizens with their Representatives

The ability of party systems to constructively shape electoral competition depends on the exchange of high-quality information. Conceptually, parties connect citizens to elected officials through a market mechanism. In democratic multiparty systems, political parties bundle many disparate, and occasionally conflicting, interests into a single branded package (interest aggregation) which they in turn “sell” to voters during elections (interest articulation).2 Importantly, this process represents an ideal model of democratic competition between programmatic political parties, which political scientists expect to produce the best democratic outcomes for citizens, including high-quality public goods and services and high levels of accountability. In practice, however, no party or party system fully approximates this model, and many fall well short of it. 

Indeed, in many cases, parties fail to effectively aggregate or articulate citizen preferences. Disinformation, by creating fractured, isolated epistemic communities, clearly makes interest aggregation and articulation more difficult, although it is ultimately unclear whether disinformation is a cause or a consequence of these failures. For these processes to operate effectively, political parties and elected officials must have good information about the preferences of their constituents, and voters must have good information about the performance of their representatives. Party brands facilitate this accountability by providing a yardstick for voters; citizens can judge their representatives against what the party brand promises. These processes are particularly important for political inclusion. Clear information about constituent preferences and representatives’ performance improves the likelihood that the interests of marginalized groups are heard and perceived as legitimate, and as such provides an electoral incentive for political leaders to address those interests. This transmission of information between elites and voters is a necessary (but not sufficient) condition for democratic party systems to function. Without good information, parties and elected officials cannot ascertain constituent preferences, and voters cannot associate performance or policy outcomes with a party brand to hold elected officials accountable. Furthermore, disinformation can influence whose voices are heard and which interests are perceived as legitimate. As such, political elites may have an incentive to use disinformation to further marginalize under-represented groups.

Excludability and Attribution: Why it is hard for citizens to hold representatives accountable for public policies without functioning parties and good information.

However, this problem of the exchange of good information is compounded by the nature of public policies. In economic terms, public goods and services are non-excludable – it is difficult to prevent individual citizens from enjoying them if they are provided. For example, a good national defense establishment protects all citizens, even those who have not paid their taxes; it is not practical or cost effective for a state to withhold national defense from specific citizens. Private goods – money in exchange for a vote, for example – can be delivered directly to specific individuals, who know exactly who provided it. 

Public policies, on the other hand, suffer from a problem of attribution. Since these goods are provided collectively, citizens may be less sure which specific officials or parties are responsible for them (and conversely, who is responsible for unintended consequences or the lack of policy altogether). Also, public policies are complicated. Both these policies, and their observable outcomes for citizens, are the products of complex interactions of interests, context, policymaking processes, and implementation. Furthermore, observable outcomes, like a good economy or a healthy population, may significantly lag the policies that are most directly responsible for them. As such, citizens may find it difficult to attribute policy outcomes to specific representatives.3 Political parties can help simplify complex policy issues for voters, again assuming an exchange of good information between elites and voters. 

The Tragedy of the Information Commons: Accounting for Incentives in Countering Disinformation Programs

These interrelated concepts – the interest aggregation and articulation functions of parties, the role of information in democratic political competition, and the attribution problems of public policies – have important implications for the design and implementation of counter-disinformation programs. Like national defense or a functioning transportation infrastructure, a healthy information environment benefits everyone, and it is impractical to exclude individuals or single groups from that benefit. For parties, this non-excludable nature of the information environment creates a collective action or “free-rider” problem.4 While the best collective outcomes occur when all actors refrain from engaging in disinformation, each individual actor has an incentive to “free-ride” – to enjoy the healthy information environment while gaining a marginal competitive advantage by muddying the waters. In this sense, the problem of disinformation for political parties is a tragedy of the commons,5 in which small transgressions by multiple actors end up spoiling the information environment.6 This can occur even in ideal circumstances – relatively open environments with competitive elections. It is compounded in authoritarian or semi-authoritarian systems in which the incumbent exercises significant control over the information environment through repression or control of media outlets, or where fringe parties or politicians have an incentive to proliferate provocative content with the goal of increased attention or visibility.7 This control of the information environment precludes meaningful electoral competition between parties, further reducing any incentive to cooperate on information integrity. While this situation may create incentives for opposition parties to counter disinformation, especially if they see gains from public perceptions of honesty, it may also lead to vicious cycles of degrading the information environment when there are alternations of power. 

Like other public goods and services, a good information environment benefits everyone. Citizens get accurate information about how their representatives are performing and can reward or sanction them accordingly. Parties get good information about what their constituents want. A good information environment depends on every actor committing to this outcome. Yet each party has an electoral incentive to muddy the waters – to let every other competitive party be honest while it misrepresents issues of public policy. Again, this dilemma makes countering disinformation difficult even in the best-case scenario. Where parties and party systems fall short of this ideal type, the dilemma will be more difficult to resolve. 
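
To make this incentive structure concrete, the minimal sketch below models each party’s choice as a stylized two-player game. The payoff values are hypothetical and chosen only for illustration; the point is that “muddying the waters” is the individually rational response to either choice by an opponent, even though mutual honesty yields the best joint outcome.

```python
# Stylized two-party "information commons" game.
# Payoff values are hypothetical vote-share numbers chosen only to illustrate
# the incentive structure described above: each party gains a marginal edge by
# using disinformation, but the shared information environment (and both
# parties' payoffs) is worst when everyone does it.

# PAYOFFS[(party_a_strategy, party_b_strategy)] = (payoff_a, payoff_b)
PAYOFFS = {
    ("honest", "honest"): (3, 3),   # healthy information commons
    ("honest", "muddy"):  (1, 4),   # B free-rides on A's honesty
    ("muddy",  "honest"): (4, 1),   # A free-rides on B's honesty
    ("muddy",  "muddy"):  (2, 2),   # commons degraded for everyone
}


def best_response(opponent_strategy: str, player: int) -> str:
    """Return the strategy that maximizes a player's payoff, holding the
    opponent's strategy fixed."""
    options = {}
    for strategy in ("honest", "muddy"):
        profile = (strategy, opponent_strategy) if player == 0 else (opponent_strategy, strategy)
        options[strategy] = PAYOFFS[profile][player]
    return max(options, key=options.get)


if __name__ == "__main__":
    for opponent in ("honest", "muddy"):
        print(f"If the opponent is {opponent}, Party A's best response is "
              f"{best_response(opponent, player=0)!r}")
    # Output: 'muddy' in both cases; the individually rational strategy
    # degrades the shared information environment, which is the collective
    # action problem described in the text.
```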

Highlight


Principal-Agent Problem: An organizational problem in which one actor (the principal) has authority to set collective goals and must ensure that one or more other actors (the agents) behave in a way that advances those goals, despite the agents controlling information about their own performance. For an illustration of the principal-agent problem in campaign messaging, see Enos, Ryan D., and Eitan D. Hersh. “Party Activists as Campaign Advertisers: The Ground Campaign as a Principal-Agent Problem.” American Political Science Review 109, no. 2 (May 2015): 252–78. https://doi.org/10.1017/S0003055415000064.

The Principal-Agent Problem of Political Parties: Maintaining commitments to countering disinformation within parties

Furthermore, political parties are not unitary; they are coalitions of varied (and often competing) candidates, constituencies, and interest groups. As such, all political parties face an additional challenge of keeping candidates and members accountable to the party’s organizational goals and platform. In the context of disinformation, even democratically inclined or reform parties, or parties that believe they can gain votes by taking a stand against disinformation, confront a principal-agent problem. On the one hand, party leaders may simply be unaware of affiliates’ attempts to generate or take advantage of disinformation. On the other hand, this problem creates plausible deniability – elites may tacitly encourage supporters to engage in disinformation to help the party’s electoral prospects while the leadership signals a commitment to information integrity. In addition, individual party members often exploit gender or other identity-based cleavages to undercut “competitors” within their own party and gain a competitive edge, tactics that can include hate speech, disinformation, or other harmful forms of content promoted in the public sphere. If this dynamic goes unacknowledged, DRG programming can inadvertently legitimize campaign tactics that undermine democratic accountability. In short, DRG practitioners should not assume political parties are unitary, and technical solutions should include approaches to helping political party actors ensure that all candidates and supporters maintain commitments to information integrity. While these models help illustrate important incentives that program designers should be sensitive to, they do not preclude technical solutions. Beyond providing encouragement, support, and training for party leadership in setting tone and expectations, establishing infrastructure for communication and coordination within the party can help hold members and candidates accountable. The “DRG Program Responses to Disinformation with Political Party Partners” section below provides concrete ideas for programs to support parties’ efforts to protect information integrity.

Party Functions, Incentives for Abuse, and Program Design

In concrete terms, political parties perform four information-based functions in democratic multi-party systems: interest aggregation, interest articulation, citizen mobilization, and persuasion. Democratic collective outcomes are more likely when parties perform these functions based on good information. However, within each function, parties or individual candidates have incentives to manipulate information to gain an electoral advantage.

Interest aggregation refers to parties’ capacity to solicit information about citizen interests and preferences. To develop responsive policies and compete in elections, parties must have reliable information about the interests and preferences of voters. However, they may also have an incentive to mischaracterize public sentiment, both to their opponents and to the public. For instance, to prioritize a policy that is broadly unpopular but important to a key constituency, a party or candidate might have an incentive to mischaracterize a public opinion study, or to artificially amplify support for a policy on social media using bot networks. 

Interest articulation refers to parties’ ability to promote ideas, platforms, and policies, both in the campaign and policymaking process. Interest articulation requires political parties to engage in both mass and targeted communication on issues with voters. This function may also require parties and candidates to persuade (see below) citizens of their viewpoints – particularly to convince voters that specific policies will fulfill those voters’ interests. Again, there is a social benefit to “true” information about the policies and positions – citizens can cast their votes for the parties that best represent their preferences. However, to gain an electoral advantage, individual parties or candidates may have an interest in misrepresenting their policy positions or the potential consequences of preferred policies, by fabricating research studies or by scapegoating vulnerable groups. 

Mobilization refers to parties’ capacity to activate citizens for political engagement, including attending rallies or events, taking discrete actions like signing petitions or contacting representatives, and especially voting. To produce the most democratic outcomes, mobilization should be based on good information; parties should provide potential voters with accurate information about policies and the electoral process – particularly where and how to vote.  However, individual parties and candidates can gain an electoral advantage by engaging in more nefarious mobilization tactics. Mobilization can involve coercion – the use of disinformation to “scare” voters about the consequences of opponents’ policies, to activate voters by inflaming prejudices or political cleavages, or to demobilize opposition candidates or supporters through harassment or by generating apathy. 

Persuasion refers to parties’ or candidates’ attempts to change voters’ opinions on candidates or policy issues. In contrast to mobilization, which often focuses on known party supporters or apathetic voters, persuasion is usually targeted at moderates, “undecideds,” or weak supporters of opposing parties. 

Highlight


Key Agents: Trolls, bots, fake news websites, conspiracy theorists, politicians, partisan media outlets, mainstream media outlets, and foreign governments9

Key Messages10: causing offense, affective polarization, racism/sexism/misogyny, “social proof” (artificial inflation of indicators that a belief is widely held), harassment, deterrence (use of harassment or intimidation to discourage an actor from taking an action, like running for office or advocating for a policy), entertainment, conspiracy theories, fomenting fear or anxiety of a preferred in-group, logical fallacies, misrepresentations of public policy, factually false statements 

Key Interpreters: Citizens, party members and supporters, elected officials, members of the media

Key Resource


Notes on Sources of Disinformation:

The information disorder framework suggests identifying disinformation tactics and possible responses by thinking systematically about the agent, the message, and the interpreter of the information. Many DRG programs, especially those funded by USG donors, focus on building resilience within a target country to foreign disinformation campaigns, especially by the governments (or pro-government supporters) of China, Russia, and Iran. However, it is important to note, especially among political parties, that the agents (or perpetrators) of disinformation campaigns may also be domestic actors seeking to affect the behavior of their political opposition or supporters. Even in the case of foreign-directed campaigns, interpreters (or targets) are not selected broadly or arbitrarily. Rather, foreign campaigns seek to exacerbate existing social and political cleavages, with the goal of eroding trust in institutions writ large. Furthermore, foreign campaigns often rely on witting or unwitting supporters in target countries. In both foreign and domestic disinformation campaigns, therefore, historically marginalized groups including women, ethnic, religious, or linguistic minorities, persons with disabilities, and LGBTI individuals are often disproportionately targeted and injured by these efforts.8

Implications

These interrelated concepts – the excludability and attribution problems of public policies, the concept of the information space as a tragedy of the commons, and the role of information in the interest aggregation, articulation, and mobilization functions of political parties – have several concrete implications for practitioners designing and implementing counter-disinformation programs:

  1. The best collective democratic outcomes require a healthy information environment.
  2. Each individual actor (a party or candidate) may perceive an incentive to let others behave honestly while they try to gain a competitive advantage through disinformation.
  3. As such, political parties have an incentive both to perpetuate and to mitigate disinformation. Whether they choose to perpetuate or mitigate depends on the context; in short, parties want voters to have true information about things that help them and false information about things that hurt them. The inverse is true for their political opponents. 
  4. Information disorders are a product of many actors perceiving this incentive structure and knowing their competition is acting according to these incentives. Parties might be willing to commit to information integrity if they could be confident their competitors would do the same simultaneously. However, if they are not confident their opponents will do the same, even honest or democratically inclined parties may be unwilling or unable to forgo using disinformation if it means losing elections and their opportunity to implement their agenda.
  5. When political parties ARE committed to information integrity, they are not unitary, and face an additional challenge of keeping candidates, members, and supporters accountable to the parties’ commitments. This unwillingness or inability to monitor and sanction co-partisans compounds the dilemma in point four above.  
  6. Consider these political incentives before designing and implementing technical solutions. Frameworks like Thinking and Working Politically and Applied Political Economy Analysis can help practitioners better understand the unique political incentives facing potential partners and beneficiaries. Key political solutions may include internal and external monitoring and coordination between relevant parties in committing to mitigating disinformation. 

A note on technical solutions: In some cases, particularly with democratic or reform parties, parties may express a willingness to take concrete steps to mitigate disinformation. In a smaller set of cases, all major competitive parties might be willing to take these steps. Even in these best-case scenarios, where parties are engaged in building solutions, technical challenges remain, including resource constraints and limited technical capacity. However, technical solutions are necessarily secondary to more fundamental political solutions. 

Over the last few years, DRG practitioners have implemented a wide range of programmatic approaches to reduce both the impact and use of disinformation and related tactics during elections. Most DRG programmatic approaches look at the overarching information ecosystem, which has incidental impacts on political party behavior and on the impact of disinformation as a campaign tactic. However, the last few years have seen an increasing number of interventions targeted specifically at political parties. These programmatic approaches operate on a wide variety of theories of change, with various implicit or explicit assumptions about incentive structures for political parties, voters, and other electoral information actors. Keeping in mind the relevant functions of political parties, and the potential benefits to disinformation agents, including domestic actors, the following section outlines broad programmatic approaches that have been applied to assist political party partners in building resilience to disinformation. Generally, DRG programs tend to operate from a similar coherent logic – that if partners and/or their voters can identify disinformation and have the technical capacity to deter or respond to it, they will improve the information environment, leading to more responsiveness and accountability. 

Highlight


One promising approach to media literacy, for example, was partnership with state educational institutions to implement media literacy at scale. The IREX Learn2Discern campaign implemented media literacy trainings through community centers, schools, and libraries. Rigorous evaluations of the Learn2Discern program in Ukraine have found that both youth and adult learners were significantly more likely to distinguish false news stories from true ones, and that short media literacy videos and source labels mitigated the impact of Russian propaganda content.

Digital Media Literacy Programs

The European Union’s Joint Research Centre definition of digital and information literacy, developed through its Digital Competence Framework, is key to understanding the effects of digital literacy programs. The framework treats digital competence as a Venn diagram of intersecting literacies – media, information, internet, and ICT – touching on different aspects of digital competence, from using the internet and understanding information in the abstract to using ICT hardware and software and engaging with media in its different forms. All of these literacies are important in understanding how programs can address disinformation vulnerability and resilience. 

One approach to election-related disinformation is to increase public awareness of the what, where, why, and how of disinformation. Education campaigns vary in scope – both in whom they reach and in what they address – and can be run by several different actors, including CSOs, schools, faith-based institutions, technology companies, and governments. The theory of change is that if the electorate is aware of the presence of disinformation and the ways in which it operates, then voters will be more critical of the information they encounter, and disinformation will have less of an impact on their political views. Broadly, this approach is among the likeliest to have positive ramifications beyond election integrity and can – where implemented at scale – reduce the impact of health misinformation and susceptibility to cybercrime. 

In this sense, media literacy programs can operate on the “interest aggregation” and “mobilization” functions of political parties by mitigating the impact of disinformation on polarization, especially among strong partisans.

AI and Disinformation Programs

This program approach includes assistance to political party partners in responding to the use of a range of artificial intelligence (AI) applications, including automated artificial amplification, deep fakes, and the manipulation and modification of audio and videos. Increasingly these approaches encompass the use of large networks of automated accounts with more intelligently informed content, shaped by user responses, personal data and other metrics.

Efforts to combat disinformation have, as of writing, largely focused on the human-led creation of misleading content; false amplification using fake accounts, paid followers, and automated bots; and paid promotion of misleading content microtargeted at users based on their probable susceptibility to a given narrative. This focus has mirrored the widespread accessibility and scalability of the technologies underpinning disinformation – bots, content farms, fake followers, and microtargeted ads that have radically changed the way election and political information is created and distributed. For political parties, these technologies facilitate the social nature of their core functions, particularly by artificially signaling “social proof” – that a proposed policy or candidate is more widely supported than it is. These technologies also game trending and recommendations algorithms, increasing exposure for information (or disinformation) that otherwise may not have been widely available.11 Artificial amplification, therefore, helps parties and candidates manipulate citizen beliefs, rather than responding to constituent interests directly. While we do not expect that there will be any major shifts in the vectors for election-related disinformation, the DRG community is increasingly concerned about the further automation of content creation and distribution and the ease of access to “deep fake” manipulation, in which a video or audio is created of a person saying or doing something they never said or did.  

Technological approaches have been developed to identify areas where images and audio have been altered by detecting anomalies in pixelation or audio waveforms. At present, these tools are not deployed on a systematic basis. Given that deep fakes and AI-generated content have not yet begun to play a major role in campaigns or election integrity, case studies for DRG programmatic interventions have been limited. 
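
As one illustration of the kind of anomaly detection described above, the sketch below applies error level analysis (ELA), a common image forensics heuristic in which an image is re-compressed and the resulting compression error is inspected for unusually uneven regions that may indicate editing. It is a minimal sketch using the Pillow library and a hypothetical file name, not a description of any specific deployed tool, and ELA alone is not a reliable deep fake detector.

```python
# Minimal error level analysis (ELA) sketch using Pillow.
# ELA re-saves an image at a known JPEG quality and measures the difference
# from the original; regions that were edited or pasted in often compress
# differently and stand out in the difference image. This is a forensic
# heuristic for illustration only, not a production deep fake detector.
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return the per-pixel difference between an image and a re-compressed
    copy of itself, rescaled so anomalies are easier to see."""
    original = Image.open(path).convert("RGB")

    # Re-compress in memory at a fixed JPEG quality.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixels that compress very differently from their neighbors are
    # candidates for closer (human) inspection.
    diff = ImageChops.difference(original, resaved)
    max_channel_diff = max(channel_max for _, channel_max in diff.getextrema())
    scale = 255.0 / max(max_channel_diff, 1)
    return diff.point(lambda value: value * scale)


if __name__ == "__main__":
    # "suspect_image.jpg" is a hypothetical file name for illustration.
    error_level_analysis("suspect_image.jpg").save("suspect_image_ela.png")
```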

Although DRG programmatic responses have been limited, research organizations have established knowledge repositories on problems of computational propaganda. For example, the Oxford Internet Institute, through the Program on Computational Propaganda, developed the ComProp Navigator, a curated collection of resources for civil society organizations to consider when responding to disinformation issues.

Programs for Closed Online Spaces and Messaging Apps

Programs countering disinformation on applications that are “closed” (or private) and encrypted must consider the difficulty of accessing user data and the privacy implications of collecting it. 

Disinformation campaigns are rapidly moving from the relatively public sphere of online social media and content platforms like Facebook, YouTube, and Twitter, to private messaging platforms such as WhatsApp, Line, Telegram, and SMS. Several of those platforms are encrypted, making it a challenge to track and prevent the spread of false content and amplification. In several instances, political parties have exploited private messaging to target supporters who then forward misleading messages to other supporters – giving little opportunity for independent or opposition actors to counteract or correct messaging. 

Several programmatic approaches have emerged to combat this challenge. In Taiwan, a civil society group created an initiative called “CoFacts” to address the large-scale spread of political disinformation on LINE. Messages can be forwarded to the CoFacts bot for fact-checking by a team of volunteers; the CoFacts bot can also be added to private groups and will automatically share corrections if a fact-checked piece of false content is shared within the group. This preserves the privacy of the group writ large while allowing for the monitoring and countering of false information. 
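
The hypothetical sketch below illustrates the general pattern such a bot follows – match a forwarded message against a database of claims volunteers have already fact-checked and reply with the recorded correction – using a simplified in-memory claim store. It is not drawn from the actual CoFacts codebase or the LINE messaging API.

```python
# Conceptual sketch of a fact-checking chatbot for closed messaging groups,
# in the spirit of the CoFacts workflow described above. The claim database,
# matching rule, and messages are hypothetical simplifications; the real
# service relies on volunteer fact-checkers and the LINE platform.
import hashlib
from dataclasses import dataclass


@dataclass
class FactCheck:
    claim: str
    verdict: str      # e.g. "false", "misleading", "true"
    correction: str   # short explanation, ideally with a source link


def normalize(text: str) -> str:
    """Collapse whitespace and case so near-identical forwards still match."""
    return " ".join(text.lower().split())


def fingerprint(text: str) -> str:
    return hashlib.sha256(normalize(text).encode("utf-8")).hexdigest()


# Hypothetical database of claims volunteers have already reviewed.
CHECKED_CLAIMS = {
    fingerprint("Polling stations will close at noon on election day."): FactCheck(
        claim="Polling stations will close at noon on election day.",
        verdict="false",
        correction="Polls are open until 8 p.m.; see the election commission's official notice.",
    ),
}


def handle_group_message(message: str) -> str | None:
    """If a forwarded message matches a fact-checked claim, return a
    correction to post back into the group; otherwise stay silent."""
    match = CHECKED_CLAIMS.get(fingerprint(message))
    if match is None:
        return None  # unknown claim: could be queued for volunteer review
    return f"Fact-check ({match.verdict}): {match.correction}"


if __name__ == "__main__":
    print(handle_group_message("Polling stations will close at NOON on election day."))
```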

In several countries with contentious elections or political situations, Facebook (the parent company of WhatsApp) has limited the size of WhatsApp groups and the number of times a message can be forwarded, which reduces the ease and potential for virality on the platform. Another approach is to flood encrypted or private messaging services with accurate information. For example, the Taiwanese government has employed a number of comics and comedians to create fact-based, easy to forward content designed for virality. 

Programs on Data Harvesting, Ad Tech & Microtargeting

Programs countering the use of private user data in targeted disinformation campaigns are in their infancy as of this writing. However, these approaches are becoming increasingly important as this user data can be used to inform automated systems and ad buys in political campaigns. Programs include efforts to reverse engineer these systems to illuminate their ubiquity and effect. 

Data harvesting, advertising technology, and microtargeting increasingly feature prominently in parties’ mobilization and persuasion functions. Advertising tools allow political parties to tailor messages to small groups based on demographic, attitudinal, behavioral, and geographic characteristics gleaned from a variety of sources, including online behavior. This capacity to tailor political messages to smaller and smaller constituencies has important implications for democratic outcomes. Individual parties and candidates use this technology because it helps optimize their messaging. Socially, however, the adoption of this technology has three important consequences. 

First, it undercuts the interest aggregation function of political parties. Recall that democratic outcomes are most likely when parties effectively bundle disparate interests and policies under one brand for “sale” to broad swaths of voters. The microtargeted communications facilitated by advertising technology allow single parties or candidates to tailor messages directly to small groups. This approach may produce short-run gains in mobilization effectiveness at the expense of negotiating common policy priorities and building consensus around issues. Second, the adoption of this technology also facilitates the more precise targeting of disinformation, hate speech, harassment, and other nefarious tactics that parties or candidates might employ to activate their own supporters or suppress the engagement of supporters of their political opponents. Third, microtargeting effectively “hides” content from the media, fact-checkers, or opposing parties who might otherwise be able to respond to or debate the information in question.12
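
To make the mechanism concrete, the sketch below shows how little is required to implement the underlying targeting logic: a handful of harvested attributes is enough to split an audience into narrow segments that each receive a different tailored message, leaving no single public message for journalists, fact-checkers, or opponents to scrutinize. The voter records, attributes, and messages are entirely hypothetical.

```python
# Illustration of the segmentation logic behind microtargeted messaging.
# The voter file, attributes, and messages are hypothetical; the point is
# that a few harvested attributes are enough to split an audience into
# narrow segments that each see different content.
from collections import defaultdict

VOTER_FILE = [
    {"id": 1, "age_band": "18-29", "district": "urban", "issue": "housing"},
    {"id": 2, "age_band": "60+",   "district": "rural", "issue": "pensions"},
    {"id": 3, "age_band": "18-29", "district": "urban", "issue": "housing"},
    {"id": 4, "age_band": "30-59", "district": "rural", "issue": "roads"},
]

# Hypothetical tailored messages keyed by segment.
MESSAGES = {
    ("18-29", "urban", "housing"): "Only we will cap your rent.",
    ("60+", "rural", "pensions"): "Our opponents plan to cut your pension.",
    ("30-59", "rural", "roads"): "We will repave every rural road next year.",
}


def segment(voters):
    """Group voters into micro-audiences by harvested attributes."""
    segments = defaultdict(list)
    for voter in voters:
        key = (voter["age_band"], voter["district"], voter["issue"])
        segments[key].append(voter["id"])
    return segments


if __name__ == "__main__":
    for key, members in segment(VOTER_FILE).items():
        print(f"{len(members)} voter(s) in segment {key}: "
              f"{MESSAGES.get(key, '(no tailored message)')}")
```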

DRG programs to encourage best practices in, and discourage abuses of, advertising technology have tended to lag the adoption of these methods. One example, however, is the Institute of Mass Information in Ukraine, which monitors social media platforms during elections. Its executive director noted in 2019 that Facebook was not particularly effective in addressing abusive political advertising, especially “native” or “sponsored” content – political advertising disguised to look like news. In this case, Facebook's political advertising database was not useful to third-party monitors because the advertising content was so difficult to detect.13 This challenge provides one example of how innovations in advertising technology might undermine democratic outcomes, especially when they provide electoral benefits to individual parties or candidates: if political messages are disguised to look like factual information or news and precisely targeted to consumers’ attitudes, tastes, or behaviors, producers are more likely to manipulate citizens’ preferences than to respond to them. 

Programs on Disinformation Content and Tactics

Programs examining disinformation content and tactics take on a wide variety of forms, from simply collecting and analyzing the information to infiltrating disinformation groups to study their methods. These approaches also play an important accountability function with respect to political parties. A focus on the content of disinformation may help citizens and CSOs clarify complex policy issues, reducing the space for parties and candidates to muddy the waters. This approach sees independent journalists, volunteers, or CSOs check the veracity of content, issue corrections, and – in some instances – work with social media companies to flag misleading content, limit its spread, and post the fact-checker’s correction alongside a post. Some of these initiatives target political party or candidate content explicitly, while others look at the broader information ecosystem and fact-check stories based on their likely impact, spread, or a specific interest area.

Programs to develop fact-checking and verification outlets are rarely implemented in direct partnership with political party actors, given that the approach requires political neutrality to be effective. However, hypothetically, these programs serve an important accountability function by acting on the incentives of political actors. A theory of change underlying these approaches is that if political actors, especially elected officials, know that false statements will be identified and corrected in a public forum, they may be less likely to engage in this behavior in the first place. Furthermore, fact-checking and verification outlets can provide accurate information to voters, who may then more effectively punish purveyors of disinformation at the ballot box. In Ukraine, for example, a program funded by the British Embassy and implemented by CASE Ukraine developed a set of information technology (IT) tools to enable citizens to analyze state budgets, in theory developing the critical thinking needed to counter politicians’ populist rhetoric on complex economic issues.14 Similarly, support for “explainer journalism” modeled on outlets like Vox.com in the United States has emerged as an approach to counter parties’ attempts to confuse citizens on complex policy issues. VoxUkraine, for example, supported by several international donors and implementing partners, provides fact-checking, explainers, and analytical articles, especially on issues of economic reform in Ukraine.15

Program approaches have also drawn on pop culture, using satire and humor to encourage critical thinking about disinformation on complex issues. For example, Toronto TV, supported by the National Endowment for Democracy, Internews, and Pact, and inspired by American satirical takes on news and current events by Jon Stewart, John Oliver, Hasan Minhaj, and others, uses social media platforms and short video segments to challenge disinformation narratives propagated by prominent politicians.

A number of the interventions aimed at this issue have focused on countering disinformation ahead of election cycles and understanding the role of social media in spreading information during modern political campaigns, such as International IDEA’s roundtable on “Protecting Tunisian Elections,” held in 2019. Similarly, the Belfer Center’s Cybersecurity Campaign Handbook, developed in partnership with NDI and IRI, provides context and clear guidance for campaigns facing a variety of cybersecurity issues, including disinformation and hacking. In terms of more concrete activities, DRG practitioners are building media monitoring into existing programs, including election observation. Grafting media monitoring onto existing program models and activities is a promising approach that could allow DRG programs to counter disinformation at scale. However, a potential drawback of this approach is that it concentrates interventions on election cycles, while both the content and tactics of disinformation transcend election cycles and operate over long periods of time.16 With this in mind, program designers and funders should consider supporting efforts that bridge elections and often extend beyond the life of a standard DRG program.

Ultimately the real-world effects of content awareness and fact-checking programs are unclear. Academic research suggests that while fact-checking can change individual attitudes under very specific circumstances, it also has the potential to cause blowback or retrenchment – increased belief in the material that was fact-checked in the first place.17 Furthermore, there appears to be relatively little research on whether fact-checking deters the proliferation of disinformation among political elites. Anecdotally, fact-checking may lead politicians to attempt to discredit the source, rather than change their behavior.18 Ultimately, an accounting of any deterrent effect of fact-checking program approaches will require donors and implementers to evaluate the impact of these programs more rigorously.

In any case, the existence of fact-checking, verification outlets, or awareness building alone is likely not sufficient to change political actors’ behavior regarding false statements or disinformation. In Ukraine, for example, research suggests that audiences for prominent fact-checking outlets were constrained geographically. The primary audiences tended to be younger, more urban, internet-connected, educated, and wealthy, and already inclined to monitor and sanction disinformation on their own.20 Fact-checking and verification programs should therefore pay close attention to deliberately expanding audiences to include populations that might otherwise lack the opportunity or resources to access high-quality information. These programs should also consider efforts to make elected officials themselves aware of these outlets’ monitoring mechanisms and audience reach. If candidates or elected officials are confident that the products of these outlets are not accessible to, or used by, their specific constituencies, these programs will be less effective in serving an accountability function. 

Highlight


CEPPS research identified dozens of programs that support fact-checking outlets across countries. For specific examples, consult the program repository and the Poynter Institute International Fact-Checking Network.

Research Programs on Disinformation Vulnerability and Resilience

These programs focus on the targets of disinformation, examining aspects of their background, the kinds of disinformation they respond to, and other demographic factors to understand how they become susceptible to, or can resist, false content. For research programs with political party partners, these programs generally operate from a theory of change that hypothesizes that if there is greater awareness of organizational vulnerabilities to disinformation, then political party officials will be motivated to improve their party’s resilience. Two prominent examples of DRG programs that aim to provide research on vulnerability and resilience to policymakers, including elected officials and political parties, are IRI’s Beacon Project and NDI’s INFO/tegrity Initiative.

IRI’s Beacon Project supports original research into disinformation vulnerability and resilience with public opinion research, analytical pieces, narrative monitoring, and mainstream and social media monitoring through in-house expertise and in collaboration with local partners in Europe. These research products are shared among broad coalitions of stakeholders and applied in programmatic responses to disinformation narratives and through engagement with policymakers at the local, national, and European Union levels. Similarly, NDI’s INFO/tegrity Initiative commissions original research on vulnerabilities to disinformation, which in turn strengthens programming to build resilience, in partnership with political parties, social media platforms, and technology firms. Finally, DRG practitioners are increasingly working with academic partners to produce research on disinformation to strengthen programmatic approaches to building resilience. For example, the Defending Digital Democracy project at Harvard University’s Belfer Center connects academic research on disinformation threats and vulnerability to governments, CSOs, technology firms, and political organizations. 

Programs for Understanding the Spread of Disinformation Online

Researchers and programmers seek to understand the roots of disinformation campaigns online by studying datasets of social media content to assess the virality of certain kinds of content, the communities involved, and individual users’ roles.

Disinformation is a cheap, effective campaign tactic that usually goes entirely undetected, making the reputational cost of political party use of disinformation effectively nil. Several programs have recently emerged that track the use of content farms, false amplification, the buying of followers and likes, troll armies, and other tactics by political actors. This programmatic approach has been supported by the growing accessibility of digital forensics research skills; increasing awareness among local actors of the role disinformation can play in political campaigns; and, owing to concerns about malign foreign disinformation during elections, investment in content archiving technologies, social media mapping and graphing, and media monitoring platforms. This approach focuses on the behavioral component of disinformation – it does not attempt to assess the veracity of the content being produced or amplified. 
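
Where the text mentions social media mapping and graphing, the sketch below shows one basic, behavior-focused form this analysis can take: building a directed share network with the networkx library and flagging accounts whose amplification is both heavy and concentrated on a single source. The edge list and thresholds are invented for illustration; real monitoring works from platform data or archives and combines many more signals before drawing conclusions.

```python
# Minimal sketch of behavior-focused amplification analysis with networkx.
# The edge list (who shared whose post) and the flagging threshold are
# hypothetical; this looks only at sharing behavior, not at whether the
# shared content is true or false, mirroring the content-agnostic approach
# described in the text.
import networkx as nx

# Directed edges: (amplifier_account, original_poster)
SHARES = [
    ("acct_a", "party_x_official"), ("acct_a", "party_x_official"),
    ("acct_b", "party_x_official"), ("acct_b", "news_outlet"),
    ("acct_c", "party_x_official"), ("acct_c", "party_x_official"),
    ("acct_c", "party_x_official"), ("citizen_1", "news_outlet"),
]


def build_share_graph(shares):
    graph = nx.MultiDiGraph()  # a multigraph keeps repeated shares as separate edges
    graph.add_edges_from(shares)
    return graph


def flag_heavy_amplifiers(graph, min_shares=3, min_concentration=0.75):
    """Flag accounts that share a lot and mostly amplify a single source."""
    flagged = []
    for account in graph.nodes:
        out_edges = list(graph.out_edges(account))
        if len(out_edges) < min_shares:
            continue
        targets = [target for _, target in out_edges]
        top_share = max(targets.count(t) for t in set(targets)) / len(targets)
        if top_share >= min_concentration:
            flagged.append((account, len(out_edges), top_share))
    return flagged


if __name__ == "__main__":
    graph = build_share_graph(SHARES)
    for account, total, concentration in flag_heavy_amplifiers(graph):
        print(f"{account}: {total} shares, {concentration:.0%} to one source")
```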

The implicit theory of change behind this work is that exposing the use of disinformation by political parties during campaigns will have some reputational cost, reducing their ability to deploy disinformation tactics with impunity and damaging the electoral prospects of those who do. 

Given that this approach is content agnostic, it is the one that most lends itself to changes in election rules. By exposing the tactics that political campaigns use that are most harmful to democratic integrity, election management bodies can explicitly forbid the use of those tactics during an election period.

Programs Combating Hate Speech, Incitement, and Polarization

A component of disinformation and information integrity is the use of hate speech, often in combination with false information, to incite, suppress, or polarize users. This kind of program often exists separately from others focused on disinformation but could be evaluated as another potential response.

Hate speech, stereotyping, rumors, trolling, online harassment, and doxing are mechanisms through which parties might perform their mobilization function. Particularly in environments with pronounced political, social, or economic cleavages, the propagation of inflammatory information may serve to activate supporters or demobilize supporters of opposition parties. In both domestic and foreign campaigns, disinformation in this vein attempts to exacerbate these existing cleavages. Marginalized groups, including (but not limited to) women, ethnic, religious, or linguistic minorities, and LGBTI citizens are common targets of these campaigns, particularly where the perpetrators aim to scapegoat vulnerable groups for policy failures, or where perpetrators aim to deter the participation of these groups in the political process, whether as candidates or as voters. Indeed, across contexts, online violence against women, including hate speech and threats, infliction of embarrassment and reputational risks, and sexualized distortion, constitutes a significant barrier to women’s participation in the political process by causing silence, self-censorship, and withdrawal from political engagement, both for the immediate targets and by deterring women’s participation generally.21

Furthermore, these tactics can also help mobilize supporters by drawing on fear or anxiety around changing social hierarchies. Importantly, political communications framed as stereotypes can increase acceptance of false information about the group being stereotyped.22 This appeal of stereotypes creates a powerful incentive for political parties and politicians to attack vulnerable groups with disinformation in ways that are not experienced by members of favored in-groups. 

Programmatically, DRG programs can counteract these effects by first acknowledging that disinformation disproportionately harms groups that have been historically marginalized in specific contexts, and by encouraging political party partners to engage in messaging that might improve supporters’ attitudes toward vulnerable groups.23 For example, the Westminster Foundation for Democracy Uganda office organized an e-conference for over 150 women candidates for elected office in Uganda, with a focus on navigating social barriers to political participation, including misinformation and cyberbullying. Similarly, the Women’s Democracy Network is a global network of chapters that share best practices on identifying and overcoming barriers to women’s political participation. NDI has several programs geared toward identifying and overcoming social barriers to participation within political parties specifically, including the Win with Women initiative and the #NotTheCost campaign, designed to mitigate discrimination, harassment, violence, and other forms of backlash against women’s political participation. Similarly, NDI’s safety planning tool provides a mechanism through which women who participate in politics can privately assess their security and make a plan to increase their safety, especially with respect to harassment, public shaming, threats and abuse, physical and sexual assault, economic violence, and pressure to leave politics, both in online and offline spaces. 

Key Resource


Network Approaches to Scaling Best Practices

Poynter Institute International Fact-Checking Network: The International Fact-Checking Network (IFCN) is a forum for fact-checkers worldwide hosted by the Poynter Institute for Media Studies. These organizations fact-check statements by public figures, major institutions, and other widely circulated claims of interest to society. The IFCN model is further explored in the norms and standards section. The IFCN:

  • Monitors trends and formats in fact-checking worldwide, publishing regular articles on the dedicated Poynter.org channel.

  • Provides training resources for fact-checkers.

  • Supports collaborative efforts in international fact-checking, including fellowships.

  • Convenes a yearly conference (Global Fact).

  • Is the home of the fact-checkers’ code of principles.

Policy Recommendations and Reform/ Sharing and Scaling Good Practice in Programmatic Responses

Programs that address policies around online systems, social media, and the internet can help define new rules that can reduce the impact of disinformation. A key role for DRG donors and implementing partners is to use their convening power to connect diverse stakeholders to share lessons learned and best practices, within and across countries and programs. It is important to note that many of the programs discussed above also have an important convening function – they are often deliberately designed to share best practices between key stakeholders, including politicians and political organizations, elected officials, civil servants, CSOs, media outlets, and technology firms. These convening activities hypothetically could improve outcomes through two mechanisms. First, the exchange of lessons learned and best practices could increase the skills, knowledge, capacity, or willingness of political party partners to refrain from the use of disinformation, or to take steps to improve party resilience. Second, these convening activities could serve an important coordination function. Recall that one important implication of thinking of disinformation as a tragedy of the commons is that political parties and candidates might be willing to forgo the political advantages of disinformation if they could be confident their political opponents would do the same. Programs that provide regular, scheduled, ongoing interaction between political opponents could increase confidence that political competitors are not cheating. 

IFES’s Regional Elections Administration and Political Process Strengthening (REAPPS I and II) programs in Central and Eastern Europe provide an example of how DRG support programs can facilitate this kind of coordination over a relatively long period of time. The program’s thematic focus on information security and explicit attention to cross-sectoral and cross-border networking addresses both technical approaches and underlying political incentives. 


Design 4 Democracy Coalition: The D4D Coalition aims to ensure that information technology and social media play a proactive role in supporting democracy and human rights globally. The coalition partners create programs and trainings and coordinate between members to promote the safe and responsible use of technology to advance open, democratic politics and accountable, transparent governments.

Policy Recommendations

 

  • When implementing these programmatic approaches, consider political incentives in addition to technical solutions. 
  • Consider an inclusive, gender-sensitive landscape analysis or a political economy analysis to identify how the structure of social cleavages creates incentives and opportunities for candidates or political parties to exploit context-specific norms and stereotypes around gender identity, ethnic or religious identities, sexual orientation, and groups that have been historically marginalized within that context. 
  • Programmatic interventions should account for diverging interests within parties – parties are composed of functionaries, elected officials, interest groups, formal members, supporters, and voters – each of which may have unique incentives to propagate or take advantage of disinformation. 
  • The collective action problem of disinformation makes one-off interactions with single partners difficult – consider implementing technical programs with regular, ongoing interaction between all relevant parties to increase confidence that competitors are not “cheating.”
  • Relatedly, use the convening power of donors or implementing organizations to bring relevant actors to the table. 
  • Consider pacts or pledges, especially in pre-election periods, in which all major parties commit to mitigating disinformation. Importantly, the agreement itself is cheap talk, so pay careful attention to the design of institutions, both within the pact and external to it, for monitoring compliance.
  • There is limited evidence for the effectiveness of common counter-disinformation program approaches focused on political parties and political competition, including media literacy, fact-checking, and content labeling. That there is limited evidence does not necessarily imply these programs do not work, only that DRG funders and implementing partners should invest in rigorous evaluation of these programs to determine their impact on key outcomes like political knowledge, attitudes and beliefs, polarization, propensity to engage in hate speech or harassment, and political behavior like voting, and to identify what design elements distinguish effective programs from ineffective ones. 
  • DRG program responses have tended to lag political parties’ use of sophisticated technologies like data harvesting, microtargeting, deep fakes and AI generated content. Funders and implementing partners should consider the use of innovation funds to generate concepts for responses to mitigate the potentially harmful effects of these tools, and to rigorously evaluate impact.