2. Promoting Information Integrity in Multi-Party Political Systems

Updated On: Mar 31, 2021

Over the last few years, DRG practitioners have implemented a wide range of programmatic approaches to reduce both the use and the impact of disinformation and related tactics during elections. Most DRG programs address the overarching information ecosystem, with only incidental effects on political party behavior and on disinformation as a campaign tactic. However, the last few years have seen an increasing number of interventions targeted specifically at political parties. These programmatic approaches operate on a wide variety of theories of change, with various implicit or explicit assumptions about incentive structures for political parties, voters, and other electoral information actors. Keeping in mind the relevant functions of political parties, and the potential benefits to disinformation agents, including domestic actors, the following section outlines broad programmatic approaches that have been applied to assist political party partners in building resilience to disinformation. Generally, DRG programs tend to operate from a similar logic: if partners and/or their voters can identify disinformation and have the technical capacity to deter or respond to it, they will improve the information environment, leading to more responsiveness and accountability.

Highlight


One promising approach to media literacy, for example, has been partnership with state educational institutions to implement media literacy at scale. The IREX Learn2Discern campaign implemented media literacy trainings through community centers, schools, and libraries. Rigorous evaluations of the Learn2Discern program in Ukraine have found that both youth and adult learners were significantly more likely to distinguish false news stories from true ones, and that short media literacy videos and source labels mitigated the impact of Russian propaganda content.

Digital Media Literacy Programs

The European Union’s Joint Research Centre definition of digital and information literacy, set out in its Digital Competence Framework, is key to understanding the effects of digital literacy programs. The framework treats digital competence as a Venn diagram of intersecting literacies (media, information, internet, and ICT), which together cover different aspects of digital competency, from using the internet and understanding information in the abstract, to using ICTs in terms of hardware and software, to engaging with media in its different forms. All these literacies are important for understanding how programs can address disinformation vulnerability and resilience.

One approach to election-related disinformation is to increase public awareness of the what, where, why, and how of disinformation. Education campaigns vary in scope (both who they reach and what they address) and can be run by several different actors, including CSOs, schools, faith-based institutions, technology companies, and governments. The theory of change is that if the electorate is aware of the presence of disinformation and the ways in which it operates, then they will be more critical of the information they encounter, and it will have less of an impact on their political views. Broadly, this approach is among the likeliest to have positive ramifications outside of election integrity and can, where implemented at scale, reduce the impact of health misinformation and susceptibility to cybercrime.

In this sense, media literacy programs can operate on the “interest aggregation” and “mobilization” functions of political parties by mitigating the impact of disinformation on polarization, especially among strong partisans.

AI and Disinformation Programs

This program approach includes assistance to political party partners in responding to the use of a range of artificial intelligence (AI) applications, including automated artificial amplification, deep fakes, and the manipulation and modification of audio and video. Increasingly, these tactics involve large networks of automated accounts distributing more intelligently tailored content, shaped by user responses, personal data, and other metrics.

Efforts to combat disinformation have, as of writing, largely focused on the human-led creation of misleading content; false amplification using fake accounts, paid followers, and automated bots; and paid promotion of misleading content microtargeted at users based on their probable susceptibility to a given narrative. This focus has mirrored the widespread accessibility and scalability of the technologies underpinning disinformation: bots, content farms, fake followers, and microtargeted ads have radically changed the way election and political information is created and distributed. For political parties, these technologies facilitate the social nature of their core functions, particularly by artificially signaling “social proof,” the impression that a proposed policy or candidate is more widely supported than it is. These technologies also game trending and recommendation algorithms, increasing exposure for information (or disinformation) that otherwise may not have been widely available.11 Artificial amplification, therefore, helps parties and candidates manipulate citizen beliefs rather than respond to constituent interests directly. While we do not expect any major shifts in the vectors for election-related disinformation, the DRG community is increasingly concerned about the further automation of content creation and distribution and the ease of access to “deep fake” manipulation, in which a video or audio clip is created of a person saying or doing something they never said or did.

Technological approaches have been developed to identify areas where image and audio have been altered by detecting anomalies in pixelation or audio waves. At present, these tools are not deployed on a systematic basis. Given that deep fakes and AI-generated content have not yet begun to play a major role in campaigns or election integrity, case studies for DRG programmatic interventions have been limited.
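
For illustration, the sketch below shows one widely used heuristic of this kind, error level analysis (ELA): a JPEG is recompressed at a known quality and the compression error is compared across the frame, since regions edited after the original save often stand out. This is a minimal, hypothetical sketch assuming the Pillow library, not a description of any specific tool referenced in this guide; the file names and quality setting are illustrative.

```python
# A minimal sketch of error level analysis (ELA), one heuristic for spotting
# edited regions in JPEG images. Assumes the Pillow library is installed;
# the quality setting and scaling are illustrative, not tuned values.
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Return an image highlighting areas whose recompression error differs
    from the rest of the frame; bright regions warrant closer inspection."""
    original = Image.open(path).convert("RGB")

    # Re-save the image at a known JPEG quality and reload it.
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference between the original and the recompressed copy.
    diff = ImageChops.difference(original, resaved)

    # Rescale the (usually faint) differences so they are visible to the eye.
    max_diff = max(channel_max for _, channel_max in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, int(value * 255 / max_diff)))


if __name__ == "__main__":
    # Hypothetical input and output file names.
    error_level_analysis("suspect_image.jpg").save("suspect_image_ela.png")
```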

Although DRG programmatic responses have been limited, research organizations have established knowledge repositories on problems of computational propaganda. For example, the Oxford Internet Institute, through the Program on Computational Propaganda, developed the ComProp Navigator, a curated collection of resources for civil society organizations to consider when responding to disinformation issues.

Programs for Closed Online Spaces and Messaging Apps

Programs countering disinformation on applications that are 'closed' (or private) and encrypted must contend with the difficulty of accessing user data and the privacy implications of collecting it.

Disinformation campaigns are rapidly moving from the relatively public sphere of online social media and content platforms like Facebook, YouTube, and Twitter, to private messaging platforms such as WhatsApp, Line, Telegram, and SMS. Several of those platforms are encrypted, making it a challenge to track and prevent the spread of false content and amplification. In several instances, political parties have exploited private messaging to target supporters who then forward misleading messages to other supporters – giving little opportunity for independent or opposition actors to counteract or correct messaging. 

Several programmatic approaches have emerged to combat this challenge. In Taiwan, a civil society group created an initiative called “CoFacts” to address the large-scale spread of political disinformation on LINE. Messages can be forwarded to the CoFacts bot for fact-checking by a team of volunteers; the bot can also be added to private groups, where it automatically shares corrections if a previously fact-checked piece of false content is posted. This preserves the privacy of the group writ large, while allowing false information to be monitored and countered.
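
As a rough illustration of the underlying mechanics (not CoFacts’ actual implementation), the core matching step of a bot like this can be reduced to comparing an incoming forwarded message against a database of previously fact-checked claims and replying with the stored correction. The claims, URL, and similarity threshold in the sketch below are hypothetical placeholders.

```python
# A minimal sketch of the matching step behind a CoFacts-style bot:
# compare a forwarded message against previously fact-checked claims and,
# if it matches closely, return the stored correction. The claims, URL,
# and similarity threshold are hypothetical placeholders.
import difflib
from typing import Optional

# Hypothetical store mapping known false claims to published corrections.
FACT_CHECKED_CLAIMS = {
    "candidate x was arrested for fraud last week":
        "False. No such arrest occurred; see the fact-check at example.org/checks/123.",
}


def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical forwards match."""
    return " ".join(text.lower().split())


def lookup_correction(message: str, cutoff: float = 0.8) -> Optional[str]:
    """Return a correction if the message closely resembles a known false claim."""
    match = difflib.get_close_matches(
        normalize(message), list(FACT_CHECKED_CLAIMS), n=1, cutoff=cutoff
    )
    return FACT_CHECKED_CLAIMS[match[0]] if match else None


if __name__ == "__main__":
    forwarded = "Candidate X was arrested for fraud last week!!!"
    reply = lookup_correction(forwarded)
    print(reply or "No match found; queue the message for volunteer fact-checkers.")
```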

In several countries with contentious elections or political situations, Facebook (the parent company of WhatsApp) has limited the size of WhatsApp groups and the number of times a message can be forwarded, which reduces the ease and potential for virality on the platform. Another approach is to flood encrypted or private messaging services with accurate information. For example, the Taiwanese government has employed a number of comics and comedians to create fact-based, easy to forward content designed for virality. 

Programs on Data Harvesting, Ad Tech & Microtargeting

Programs countering the use of private user data in targeted disinformation campaigns are in their infancy as of this writing. However, these approaches are becoming increasingly important as this user data can be used to inform automated systems and ad buys in political campaigns. Programs include efforts to reverse engineer these systems to illuminate their ubiquity and effect. 

Data harvesting, advertising technology, and microtargeting feature increasingly prominently in parties’ mobilization and persuasion functions. Advertising tools allow political parties to tailor messages to small groups based on demographic, attitudinal, behavioral, and geographic characteristics gleaned from a variety of sources, including online behavior. This capacity to tailor political messages to smaller and smaller constituencies has important implications for democratic outcomes. Individual parties and candidates use this technology because it helps optimize their messaging. Socially, however, the adoption of this technology has three important consequences.

First, it undercuts the interest aggregation function of political parties. Recall that democratic outcomes are most likely when parties effectively bundle disparate interests and policies under one brand for “sale” to broad swaths of voters. The microtargeted communications facilitated by advertising technology allow single parties or candidates to tailor messages directly to small groups. This approach may produce short-run gains in mobilization effectiveness at the expense of negotiating common policy priorities and building consensus around issues. Second, the adoption of this technology also facilitates the more precise targeting of disinformation, hate speech, harassment, and other nefarious tactics that parties or candidates might employ to activate their own supporters or suppress the engagement of supporters of their political opponents. Third, microtargeting effectively “hides” content from the media, fact-checkers, or opposing parties who might otherwise be able to respond to or debate the information in question.12

DRG programs to encourage best practices in, and discourage abuses of, advertising technology have tended to lag behind the adoption of these methods. One example, however, is the Institute of Mass Information in Ukraine, which monitors social media platforms during elections. Its Executive Director noted in 2019 that Facebook was not particularly effective in addressing abusive political advertising, especially “native” or “sponsored” content: political advertising disguised to look like news. In this case, Facebook's political advertising database was not useful to third-party monitors because the advertising content was so difficult to detect.13 This challenge provides one example of how innovations in advertising technology might undermine democratic outcomes, especially when they provide electoral benefits to individual parties or candidates; if political messages are disguised to look like factual information or news and precisely targeted to consumers’ attitudes, tastes, or behaviors, producers are more likely to manipulate citizens’ preferences than to respond to them.

Programs on Disinformation Content and Tactics

Programs examining disinformation content and tactics take on a wide variety of forms, whether simply collecting and analyzing the information or looking to infiltrate disinformation groups to study their methods. These approaches also play an important accountability function with respect to political parties. A focus on the content of disinformation may help citizens and CSOs clarify complex policy issues, reducing the space for parties and candidates to muddy the waters. In this approach, independent journalists, volunteers, or CSOs check the veracity of content, issue corrections, and, in some instances, work with social media companies to flag misleading content, limit its spread, and post the fact-checkers’ corrections alongside a post. Some of these initiatives target political party or candidate content explicitly, while others look at the broader information ecosystem and fact-check stories based on their likely impact, spread, or a specific interest area.

Programs to develop fact-checking and verification outlets are rarely implemented in direct partnership with political party actors, given that the approach requires political neutrality to be effective. However, these programs hypothetically serve an important accountability function by acting on the incentives of political actors. A theory of change underlying these approaches is that if political actors, especially elected officials, know that false statements will be identified and corrected in a public forum, they may be less likely to engage in this behavior in the first place. Furthermore, fact-checking and verification outlets can provide accurate information to voters, who may then more effectively punish purveyors of disinformation at the ballot box. In Ukraine, for example, a program funded by the British Embassy and implemented by CASE Ukraine developed a set of information technology (IT) tools to enable citizens to analyze state budgets, in theory developing the critical thinking needed to counter politicians’ populist rhetoric on complex economic issues.14 Similarly, support for “explainer journalism” modeled on outlets like Vox.com in the United States has emerged as an approach to counter parties’ attempts to confuse citizens on complex policy issues. VoxUkraine, for example, supported by several international donors and implementing partners, provides fact-checking, explainers, and analytical articles, especially on issues of economic reform in Ukraine.15

Program approaches have also drawn on pop culture, using satire and humor to encourage critical thinking around disinformation on complex issues. For example, Toronto TV, supported by the National Endowment for Democracy, Internews, and Pact, and inspired by American satirical takes on news and current events by Jon Stewart, John Oliver, Hasan Minhaj, and others, uses social media platforms and short video segments to challenge disinformation narratives propagated by prominent politicians.

A number of the interventions aimed at this issue have focused on countering disinformation ahead of election cycles and understanding the role of social media in spreading information during modern political campaigns, such as International IDEA’s roundtable on “Protecting Tunisian Elections,” held in 2019.  Similarly, the Belfer Center’s Cybersecurity Campaign Handbook, developed in partnership with NDI and IRI, provides context and clear guidance for campaigns facing a variety of cybersecurity issues, including disinformation and hacking. In terms of more concrete activities, DRG practitioners are building media monitoring into existing programs, including election observation. Grafting media monitoring onto existing program models and activities is a promising approach that could allow DRG programs to counter disinformation at scale. However, a potential drawback of this approach is that it focuses intervention on election cycles, while both the content and tactics transcend election cycles and operate over long periods of time.16 With this in mind, program designers and funders should consider support for efforts that bridge elections, and often, go further than the life of a standard DRG program.

Ultimately the real-world effects of content awareness and fact-checking programs are unclear. Academic research suggests that while fact-checking can change individual attitudes under very specific circumstances, it also has the potential to cause blowback or retrenchment – increased belief in the material that was fact-checked in the first place.17 Furthermore, there appears to be relatively little research on whether fact-checking deters the proliferation of disinformation among political elites. Anecdotally, fact-checking may lead politicians to attempt to discredit the source, rather than change their behavior.18 Ultimately, an accounting of any deterrent effect of fact-checking program approaches will require donors and implementers to evaluate the impact of these programs more rigorously.

In any case, the existence of fact-checking, verification outlets, or awareness building alone is likely not sufficient to change political actors’ behavior regarding false statements or disinformation. In Ukraine, for example, research suggests that audiences for prominent fact-checking outlets were constrained geographically; primary audiences tended to be younger, more urban, internet-connected, educated, and wealthy, and already inclined to monitor and sanction disinformation on their own.20 Fact-checking and verification programs should therefore pay close attention to deliberately expanding audiences to include populations that might otherwise lack the opportunity or resources to access high-quality information. These programs should also consider efforts to make elected officials themselves conscious of these outlets’ monitoring mechanisms and audience reach. If candidates or elected officials are confident that the products of these outlets are not accessible to, or used by, their specific constituencies, these programs will be less effective in serving an accountability function.

Highlight


CEPPS research identified dozens of programs that support fact-checking outlets across countries. For specific examples, consult the program repository and the Poynter Institute International Fact-Checking Network.

Research Programs on Disinformation Vulnerability and Resilience

These programs focus on the targets of disinformation, examining aspects of their background, the kinds of disinformation they respond to, and other demographic factors to understand how they are susceptible to, or can resist, false content. Research programs with political party partners generally operate from a theory of change that hypothesizes that if there is greater awareness of organizational vulnerabilities to disinformation, then political party officials will be motivated to improve their party’s resilience. Two prominent examples of DRG programs that aim to provide research on vulnerability and resilience to policymakers, including elected officials and political parties, are IRI’s Beacon Project and NDI’s INFO/tegrity Initiative.

IRI’s Beacon Project supports original research into disinformation vulnerability and resilience with public opinion research, analytical pieces, narrative monitoring, and mainstream and social media monitoring through in-house expertise and in collaboration with local partners in Europe. These research products are shared among broad coalitions of stakeholders and applied in programmatic responses to disinformation narratives and through engagement with policymakers at the local, national, and European Union levels. Similarly, NDI’s INFO/tegrity Initiative commissions original research on vulnerabilities to disinformation, which in turn strengthens programming to build resilience, in partnership with political parties, social media platforms, and technology firms. Finally, DRG practitioners are increasingly working with academic partners to produce research on disinformation that improves programmatic approaches to building resilience. For example, the Defending Digital Democracy project at Harvard University’s Belfer Center connects academic research on disinformation threats and vulnerability to governments, CSOs, technology firms, and political organizations.

Programs for Understanding the Spread of Disinformation Online

Researchers and programmers look to understand the roots of disinformation campaigns online by studying datasets of social media content, tracing the virality of certain kinds of content, the communities involved, and individual users' roles.

Disinformation is a cheap, effective campaign tactic that usually goes entirely undetected, making the reputational cost of political parties' use of disinformation effectively nil. Several programs have recently emerged that track the use of content farms, false amplification, the buying of followers and likes, troll armies, and other tactics by political actors. This programmatic approach has been supported by the growing accessibility of digital forensics research skills; increasing awareness among local actors of the role disinformation can play in political campaigns; and, due to concerns about malign foreign disinformation during elections, investment in content archiving technologies, social media mapping and graphing, and media monitoring platforms. This approach focuses on the behavior component of disinformation: it does not attempt to assess the veracity of the content being produced or amplified.
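
One common analytical building block in this kind of behavior-focused monitoring is a co-sharing graph: accounts that repeatedly post the same links within a very short window of one another are linked, and dense clusters become candidates for closer review as possible coordinated amplification. The sketch below is a simplified, hypothetical illustration, assuming a pre-collected dataset of shares and the networkx library; it is not a description of any specific monitoring platform, and the thresholds are placeholders.

```python
# A simplified sketch of coordinated-amplification detection: build a graph
# linking accounts that repeatedly share the same URL within a short time
# window. Assumes the networkx library and a pre-collected dataset of
# (account, url, unix_timestamp) records; the thresholds are placeholders.
from collections import defaultdict
from itertools import combinations

import networkx as nx


def coordination_graph(shares, window_seconds=60, min_co_shares=3):
    """Return a graph whose connected components are candidate clusters of
    accounts exhibiting coordinated link-sharing behavior."""
    posts_by_url = defaultdict(list)
    for account, url, timestamp in shares:
        posts_by_url[url].append((account, timestamp))

    # Count how often each pair of accounts shares the same URL almost
    # simultaneously, across the whole dataset.
    pair_counts = defaultdict(int)
    for posts in posts_by_url.values():
        posts.sort(key=lambda post: post[1])
        for (acct_a, t_a), (acct_b, t_b) in combinations(posts, 2):
            if acct_a != acct_b and abs(t_a - t_b) <= window_seconds:
                pair_counts[tuple(sorted((acct_a, acct_b)))] += 1

    graph = nx.Graph()
    for (acct_a, acct_b), count in pair_counts.items():
        if count >= min_co_shares:
            graph.add_edge(acct_a, acct_b, weight=count)
    return graph


if __name__ == "__main__":
    # Hypothetical sample: two accounts pushing the same link 10 seconds apart.
    sample = [("account_1", "http://example.org/story", 1000),
              ("account_2", "http://example.org/story", 1010)]
    clusters = nx.connected_components(coordination_graph(sample, min_co_shares=1))
    print(list(clusters))  # candidate coordinated clusters under the relaxed threshold
```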

The implicit theory of change behind this work is that exposing the use of disinformation by political parties during campaigns will have some reputational cost, reducing their ability to deploy disinformation tactics with impunity and damaging the electoral prospects of those who do. 

Given that this approach is content agnostic, it is the one that most lends itself to changes in election rules. By exposing the tactics that political campaigns use that are most harmful to democratic integrity, election management bodies can explicitly forbid the use of those tactics during an election period.

Programs Combating Hate Speech, Incitement, and Polarization

A component of disinformation and information integrity is the use of hate speech, often in combination with false information, to incite, suppress, or polarize users. This kind of program often exists separately from others focused on disinformation but could be evaluated as another potential response.

Hate speech, stereotyping, rumors, trolling, online harassment, and doxing are mechanisms through which parties might perform their mobilization function. Particularly in environments with pronounced political, social, or economic cleavages, the propagation of inflammatory information may serve to activate supporters or demobilize supporters of opposition parties. In both domestic and foreign campaigns, disinformation in this vein attempts to exacerbate these existing cleavages. Marginalized groups, including (but not limited to) women, ethnic, religious, or linguistic minorities, and LGBTI citizens are common targets of these campaigns, particularly where the perpetrators aim to scapegoat vulnerable groups for policy failures, or where perpetrators aim to deter participation of these groups in the political process, whether as candidates or voters. Indeed, across contexts, online violence against women, including hate speech and threats, infliction of embarrassment and reputational risks, and sexualized distortion, has constituted a significant barrier to women’s participation in the political process by causing silence, self-censorship, and withdrawal from political engagement, both for the immediate targets and by deterring women’s participation generally.21

Furthermore, these tactics can also help mobilize supporters by drawing on fear or anxiety around changing social hierarchies. Importantly, political communications framed as stereotypes can increase acceptance of false information about the group being stereotyped.22 This appeal of stereotypes creates a powerful incentive for political parties and politicians to attack vulnerable groups with disinformation in ways that are not experienced by members of favored in-groups. 

Programmatically, DRG programs can counteract these effects by first acknowledging that disinformation disproportionately harms groups that have been historically marginalized in specific contexts, and by encouraging political party partners to engage in messaging that might improve supporters’ attitudes toward vulnerable groups.23 For example, the Westminster Foundation for Democracy Uganda office organized an e-conference for over 150 women candidates for elected office in Uganda, with a focus on navigating social barriers to political participation, including misinformation and cyberbullying. Similarly, the Women’s Democracy Network is a global network of chapters that share best practices on identifying and overcoming barriers to women’s political participation. NDI has several programs geared toward identifying and overcoming social barriers to participation within political parties specifically, including the Win with Women initiative and the #NotTheCost campaign, designed to mitigate discrimination, harassment, violence, and other forms of backlash against women’s political participation. Similarly, NDI’s safety planning tool provides a mechanism through which women who participate in politics can privately assess their security and make a plan to increase their safety, especially with respect to harassment, public shaming, threats and abuse, physical and sexual assault, economic violence, and pressure to leave politics, both in online and offline spaces. 

Key Resource


Network Approaches to Scaling Best Practices

Poynter Institute International Fact-Checking Network: The International Fact-Checking Network (IFCN) is a forum for fact-checkers worldwide hosted by the Poynter Institute for Media Studies. Member organizations fact-check statements by public figures, major institutions, and other widely circulated claims of interest to society. The IFCN model is further explored in the norms and standards section. The IFCN:

  • Monitors trends and formats in fact-checking worldwide, publishing regular articles on the dedicated Poynter.org channel.

  • Provides training resources for fact-checkers.

  • Supports collaborative efforts in international fact-checking, including fellowships.

  • Convenes a yearly conference (Global Fact).

  • Is the home of the fact-checkers’ code of principles.

Policy Recommendations and Reform / Sharing and Scaling Good Practice in Programmatic Responses

Programs that address policies around online systems, social media, and the internet can help define new rules that can reduce the impact of disinformation. A key role for DRG donors and implementing partners is to use their convening power to connect diverse stakeholders to share lessons learned and best practices, within and across countries and programs. It is important to note that many of the programs discussed above also have an important convening function – they are often deliberately designed to share best practices between key stakeholders, including politicians and political organizations, elected officials, civil servants, CSOs, media outlets, and technology firms. These convening activities hypothetically could improve outcomes through two mechanisms. First, the exchange of lessons learned and best practices could increase the skills, knowledge, capacity, or willingness of political party partners to refrain from the use of disinformation, or to take steps to improve party resilience. Second, these convening activities could serve an important coordination function. Recall that one important implication of thinking of disinformation as a tragedy of the commons is that political parties and candidates might be willing to forgo the political advantages of disinformation if they could be confident their political opponents would do the same. Programs that provide regular, scheduled, ongoing interaction between political opponents could increase confidence that political competitors are not cheating. 

IFES’s Regional Elections Administration and Political Process Strengthening (REAPPS I and II) programs in Central and Eastern Europe provide an example of how DRG support programs can facilitate this kind of coordination over a relatively long period of time. The program’s thematic focus on information security and explicit attention to cross-sectoral and cross-border networking addresses both technical approaches and underlying political incentives. 


Design 4 Democracy Coalition: The D4D Coalition aims to ensure that information technology and social media play a proactive role in supporting democracy and human rights globally. The coalition partners create programs and trainings and coordinate between members to promote the safe and responsible use of technology to advance open, democratic politics and accountable, transparent governments.

Footnotes

11. Bret Schafer, personal communication with the author, January 2021.

12. Bret Schafer, personal communication with the authors, January 2021.

13. Oksana Romaniuk, Executive Director, Institute of Mass Information. Interview with Dina Sadek and Bret Barrowman. Kyiv, Ukraine. October 2019.

14. Alina Chubko (Project Coordinator) and Anonymous Officer, Embassy of the Czech Republic. Interview with Dina Sadek and Bret Barrowman. Kyiv, Ukraine. October 2019. 

15. Ibid.

16. Bret Schafer, personal communication with the author, January 2021.

17. Tucker et al., pp. 57, 59.

18. Maksym Subenko (Head of VoxCheck) and Ilona Sologoub (Scientific Editor and Policy Projects Director, VoxUkraine; Director of Political and Economic Research, Kyiv School of Economics). Interview with Dina Sadek and Bret Barrowman. Kyiv, Ukraine. October 2019.

19. Ibid.

20. Ibid.

21. Zeiter et al., “Tweets that Chill,” p. 4.

22. Tucker et al., p. 40.

23. For a survey of existing research on the conditions under which messages can mitigate negative attitudes toward marginalized groups, see ibid., p. 43.