3. Efforts to Promote Resiliency, Digital Literacy, and Stronger Community Responses to Disinformation

Updated on: Apr 02, 2021

Collective action, community partnerships, and civil society engagement are important aspects of the private sector's approach to addressing disinformation. This section examines the investments, engagement, and partnerships of individual companies, as well as cross-sectoral and multi-stakeholder collaborations to combat disinformation.

A. Company Partnerships and Initiatives

Major technology companies such as Facebook, Google, and Twitter have all collaborated with civil society and other groups to combat disinformation, hate speech, and other harmful forms of content on their platforms. This section reviews some of the key initiatives they have undertaken to work collectively with outside groups, particularly civil society organizations, on problems in the information space.

1. Facebook 

Facebook has developed a number of public-facing partnerships and initiatives aimed at supporting civil society and other stakeholders working to promote information integrity. Among its most notable announcements, Facebook has inaugurated an independent Oversight Board, composed of technology, human rights, and policy experts with the authority to review difficult cases of speech that causes online harassment and hate or spreads disinformation and misinformation. As of the publication of this guidebook, the Oversight Board has reviewed and made determinations on content moderation cases, including cases involving China, Brazil, Malaysia, and the United States. This is significant, as the Oversight Board weighs human rights, legal considerations, and societal impact in reviewing difficult cases that the platform may not be in a position to address itself.

The company has also invested in country-specific and regional initiatives. For example, WeThink Digital is a Facebook initiative to foster digital literacy through partnerships with civil society organizations, academia, and government agencies in Asia-Pacific countries such as Indonesia, Myanmar, New Zealand, the Philippines, Sri Lanka, and Thailand. It includes public guides to user actions such as deactivating an account, digital learning modules, videos, and other pedagogical resources. In the context of elections in particular, Facebook has developed partnerships with election monitoring bodies, law enforcement, and other government institutions dedicated to investigating campaigns during electoral processes, in certain cases creating a "war room" of dedicated staff, as it did for the European Union, Ukraine, Ireland, Singapore, Brazil, and the 2020 U.S. election; these war rooms have since been closed. According to the NDI case study on the role of social media platforms in enforcing policy decisions during elections, both Facebook and Twitter worked with the National Electoral Council (CNE) in Colombia during the electoral process.

In some countries, Facebook is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on the platform. As part of these efforts, in countries such as Colombia, Indonesia, and Ukraine, as well as the United States and various EU member states, Facebook has commissioned groups (through what is described as "a thorough and rigorous application process" established by the IFCN2) to become trusted fact-checkers who vet content, provide input into the algorithms that define the news feed, and downgrade and flag content that is identified as false. In Colombia, for example, where partners include AFP Colombia, ColombiaCheck, and La Silla Vacia, a representative from one of these partners reflected on the value of working with Facebook and platforms more broadly: "I think the most important thing is to talk more closely with other platforms because the way to widen our reach is to work with them. Facebook has its problems but it reaches a lot of people and especially reaches the people that have shared false information, and if we could do something like that with Twitter, Instagram, or WhatsApp it would be great; that is the ideal next step for me."3 Groups from more than 80 countries have partnered with Facebook in this way, underscoring the broad scope of this effort.

 

In Focus: Facebook’s Social Science One Engagement

Facebook has supported the development of Social Science One, a consortium of universities and research groups working to understand various aspects of the online world, including disinformation, hate speech, and computational propaganda. The effort is also supported by foundations including the Laura and John Arnold Foundation, the Democracy Fund, the William and Flora Hewlett Foundation, the John S. and James L. Knight Foundation, the Charles Koch Foundation, the Omidyar Network, the Sloan Foundation, and the Children’s Investment Fund Foundation. The project was announced and launched in July 2018. Notably, all but three of the projects are focused on the developed world; of those three, two are in Chile and one is in Brazil. Through this consortium, the platform has given researchers access to a URLs Data Set of widely shared links that is otherwise unavailable to the wider research community.


Facebook received criticism over the program because of the slow pace of implementation, the handling of the release and management of research data, and the negotiation of other complicated issues. In any collaboration with platforms, agreements on data sharing and management are critical components of a project and must be negotiated carefully to avoid exposing private user information. The Cambridge Analytica scandal illustrates the risk: data harvested from Facebook profiles was later used by private companies to model voter behavior and target advertisements with psychographic and other information, raising serious questions about the use of private user data in campaigns and elections. It is important to highlight that the history of this project helped set the terms for research collaboration with Facebook going forward.

2. WhatsApp

Although WhatsApp is a closed platform, it has supported researchers in studying the service, and such support is one of its principal means of community engagement. These studies span a range of methodologies and show how enhanced access can produce important insights into how the closed platform is used, especially in contexts that are rarely seen or poorly understood. Many countries and regions remain a black box, particularly at the local level: groups are closed, the platform is encrypted, and it is difficult to see or understand anything in terms of content moderation.

Abuse and online manipulation of WhatsApp through automated networks are common in many places, and local languages, dialects, and slang are often unfamiliar to moderators from other regions and countries. Online violence against women in politics and elections can seriously affect the political participation of the targeted individuals, as well as have a chilling effect on the participation of women more broadly, and monitoring for hate speech should therefore seek to understand methods of tracking local lexicons. The CEPPS partners have developed methodologies for tracking online hate speech against women and other marginalized groups, such as IFES's Violence Against Women in Elections (VAWIE) framework, NDI's Votes Without Violence toolkit and Addressing Online Misogyny and Gendered Disinformation: A How-To Guide, and a social media analysis tool developed jointly through CEPPS that describes methodologies for building lexicons in local contexts.4
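To make the lexicon-based approach concrete, the sketch below shows its simplest form: matching messages against a curated list of terms. It is an illustration only; the terms and messages are hypothetical placeholders, and real monitoring efforts such as those the CEPPS methodologies describe depend on locally built lexicons and more sophisticated matching.

```python
import re

# Hypothetical lexicon of abusive terms. Real lexicons are curated with
# local partners for each language, dialect, and body of slang, as the
# CEPPS methodologies recommend.
LEXICON = ["insult_a", "slur_b", "threat_c"]

# One case-insensitive pattern with word boundaries, so a short term
# does not match inside a longer, unrelated word.
PATTERN = re.compile(
    r"\b(" + "|".join(map(re.escape, LEXICON)) + r")\b",
    re.IGNORECASE,
)

def flag_messages(messages):
    """Return (message, matched_terms) pairs for messages that contain
    at least one lexicon term."""
    flagged = []
    for msg in messages:
        hits = {h.lower() for h in PATTERN.findall(msg)}
        if hits:
            flagged.append((msg, sorted(hits)))
    return flagged

sample = ["an ordinary message", "a message containing slur_b"]
for msg, hits in flag_messages(sample):
    print(f"flagged: {msg!r} -> {hits}")
```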

In many cases, there simply are not enough resources to hire even a minimal number of moderators and technologists to deal with what is happening on the platform. This creates problems for content moderation, for reporting, and for the algorithmic detection and machine learning systems that inform them. Moderation efforts are often up against information attacks and coordinated inauthentic behavior that go beyond ordinary manipulation and can be sponsored by private or public authorities with deep pockets. In Brazil, WhatsApp's program supported studies of the country's elections by top researchers in the field: researchers at Syracuse University and the Federal University of Minas Gerais studied user information sharing and compared it to voter behavior, while others at the Institute for Technology and Society in Rio de Janeiro examined methods for training people in media literacy through the platform.

WhatsApp has supported research on the platform and enabled access to its business API in certain cases, such as the First Draft/Comprova project in Brazil. It has also financially supported groups such as the Center for Democracy and Development and the University of Birmingham to pioneer research on the platform in Nigeria.

3. Twitter

Twitter has taken a more comprehensive approach to releasing data than any other company. Since 2018, the company has made available comprehensive datasets of state-linked information operations that it has removed. Rather than providing samples or access to only a small number of researchers, Twitter established a public archive of all Tweets and related content that it has removed. The archive now runs into hundreds of millions of Tweets and several terabytes of media. 
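By way of illustration, a first pass over one of these releases often starts with loading the published CSV and charting activity over time. In the sketch below, the file name and column names (tweet_time, tweet_language) are assumptions about the release format; they should be checked against the schema of the actual download, which varies by operation.

```python
import pandas as pd

# Load one of Twitter's released information-operations CSVs. The file
# name and column names here are assumptions; check the schema of the
# release you download, as it varies by operation.
df = pd.read_csv("operation_tweets.csv", parse_dates=["tweet_time"])

# Volume of removed tweets per day: a common first look at how a
# state-linked operation's activity evolved over time.
daily_volume = df.set_index("tweet_time").resample("D").size()
print(daily_volume.tail())

# Most common languages among the operation's tweets.
print(df["tweet_language"].value_counts().head())
```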

This archive has enabled a wide range of independent research, as well as collaboration with expert organizations. In 2020, the company partnered with the Carnegie Partnership for Countering Influence Operations (PCIO) to co-host a series of virtual workshops supporting an open exchange of ideas among the research community about how information operations (IO) can be better understood, analyzed, and mitigated. Twitter’s API is a unique source of data for the academic community, and the company launched a dedicated academic API product in 2021.
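As a minimal sketch of what access through that academic track looks like, the request below runs a full-archive search against the v2 API. It assumes an approved academic project and a bearer token stored in the TWITTER_BEARER_TOKEN environment variable; the query string is only an example.

```python
import os

import requests

# Minimal sketch of a full-archive search request against Twitter's
# academic research track (API v2). Assumes a bearer token issued for
# an approved academic project is available in the environment.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

response = requests.get(
    "https://api.twitter.com/2/tweets/search/all",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={
        "query": "disinformation lang:en -is:retweet",  # example query
        "max_results": 10,
        "tweet.fields": "created_at",
    },
)
response.raise_for_status()

for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```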

More broadly, Twitter collaborates frequently with, and has provided grants to support, a number of organizations working to promote information integrity. Like Facebook, the company has worked closely with research partners such as the Stanford Internet Observatory, Graphika, and the Atlantic Council's Digital Forensic Research Lab on datasets related to networks detected and removed from its platform. The platform has also collaborated with the Oxford Internet Institute’s Computational Propaganda Project to analyze information operation activities.

 

4. Microsoft

Microsoft has initiated the Defending Democracy Program, partnering with various civil society, private sector, and academic groups working on cybersecurity, disinformation, and civic technology issues. As part of this initiative, starting in 2018, Microsoft partnered with NewsGuard, a plug-in for browsers such as Chrome and Edge that rates news websites for users based on nine journalistic integrity criteria. Based on this evaluation, a site is given a positive or negative rating, shown as green or red respectively. The plug-in has been downloaded thousands of times, and the technology powers information literacy programs in partnership with libraries and schools.

The company has also engaged in research initiatives and partnerships, including support for work on disinformation and social media at Arizona State University, the Oxford Internet Institute, and Princeton University's Center for Information Technology Policy, as well as within Microsoft Research itself.

In a cross-sectoral collaboration, Microsoft, the Bill & Melinda Gates Foundation, and USAID supported the Technology and Social Change group at the University of Washington’s Information School to develop a program for Mobile Information Literacy that includes content verification, search, and evaluation. This project developed into a Mobile Information Literacy Curriculum which has since been applied in Kenya.

5. LINE

LINE, like many other messaging apps, is sometimes taken advantage of by scammers, hoaxers, and fake news writers. While there have not been major claims of systematic disinformation on the platform, LINE has acknowledged problems with false information circulating on its networks. Fact-checkers have developed partnerships with the platform to prevent the spread of disinformation, including the CoFacts automated fact-checking system maintained by g0v (pronounced gov zero), a civic technology community in Taiwan. Users can add the CoFacts fact-checking account to their contacts, forward messages to it, and receive an answer in real time about whether the content is true or false. This also serves to automatically report suspicious messages to the platform, allowing LINE to track misinformation and disinformation without breaking end-to-end encryption.
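The forward-to-check flow can be illustrated with a short sketch. The snippet below is a hypothetical simplification, not CoFacts’ actual code or API: it looks forwarded text up in a tiny in-memory table, whereas the real system matches messages against a crowdsourced database of prior fact-checks.

```python
# Hypothetical sketch of a CoFacts-style forward-to-check flow. The
# lookup table and matching logic are placeholders, not the real
# CoFacts system; they only illustrate the mechanism described above.

fact_checks = {
    # normalized message text -> (verdict, fact-checker's note)
    "drinking hot water cures the flu": ("false", "No clinical evidence."),
}

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so near-identical forwards map
    to the same entry."""
    return " ".join(text.lower().split())

def handle_forwarded_message(text: str) -> str:
    key = normalize(text)
    if key in fact_checks:
        verdict, note = fact_checks[key]
        return f"Rated {verdict}: {note}"
    # Unknown messages are queued for human fact-checkers. Because the
    # user forwards the content voluntarily, no one else's encrypted
    # chats are read in the process.
    return "Not yet checked; submitted for review."

print(handle_forwarded_message("Drinking hot water CURES the flu"))
```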

In September 2019, LINE launched an anti-hoax campaign in partnership with the Associated Press. The campaign includes a series of educational videos focused on identifying credible news sources and fake news. In a press release, LINE said, “Taking 'Stop Fake News' as the theme, the campaign aims to help users improve their media literacy and create a safe digital environment.”

In 2018, a group of international civil society organizations, including IFES, IRI, NDI, and International IDEA, formed the Design 4 Democracy Coalition to promote coordination among democracy organizations and provide a space for constructive engagement between the democracy community and technology companies.

B. Cross-Sector and Multistakeholder Initiatives

Increasingly, the major platforms are looking for broader ways to collaborate with civil society, governments, and others to not only combat disinformation, hate speech, and other harmful forms of content on their networks, but also promote better forms of content. These collaborations come in the form of coalitions with different groups, codes of practice, and other joint initiatives.

Facebook, Twitter, and other major platforms have, for example, increasingly engaged with research groups such as the Atlantic Council's Digital Forensic Research (DFR) Lab, Graphika, and others to identify and take down large networks of false or coordinated accounts that violate community standards. In addition, local groups such as the International Society for Fair Elections and Democracy (ISFED) have assisted social media platforms with information to facilitate takedowns and other enforcement actions. Local organizations are becoming an increasingly important component of the reporting systems of platforms that lack the capacity to actively monitor and understand local contexts such as Georgia.

Among more formal collaborations, the Global Network Initiative (GNI) dates back to 2008 and continues to support multi-stakeholder engagement among platforms and civil society, particularly on issues related to disinformation and other harmful forms of content. For more information on the GNI, see the norms and standards chapter.

Among the cross-sector initiatives to combat disinformation, one of the most prominent is the European Union's Code of Practice on Disinformation, developed by a European Union (EU) working group on disinformation. The code gives member governments, as well as countries that want to trade and work with the bloc, guidelines for aligning their regulatory frameworks with the GDPR and other EU regulations governing the online sphere, along with plans for responding to disinformation through digital literacy, fact-checking, media, and support for civil society, among other interventions. Building on this code, the EU has developed a Democracy Action Plan, an initiative it plans to implement in the coming year that focuses on promoting free and fair elections, strengthening media freedom, and countering disinformation. Core to its disinformation efforts are:

  • Improving the EU’s existing toolbox for countering foreign interference
  • Overhauling the Code of Practice on Disinformation into a co-regulatory framework of obligations and accountability for online platforms
  • Setting up a robust framework for Code of Practice implementation

At the Internet Governance Forum held at UNESCO in Paris and at the Paris Peace Forum in November 2018, the President of the French Republic, Emmanuel Macron, introduced the Paris Call for Trust and Security in Cyberspace. Signatories to the Call commit to promoting nine core principles and reaffirm various commitments related to international law, cybersecurity, infrastructure protection, and countering disinformation. So far, 79 countries, 35 public authorities, 391 civil society organizations, and 705 companies and private sector entities have signed on to this common set of principles on stability and security in the information space. The United States has yet to formally commit or sign on to the initiative. Nevertheless, the initiative represents one of the most ambitious cross-sector collaborations dedicated to cybersecurity and information integrity to date.

Footnotes

2. Interview by Daniel Arnaudo (National Democratic Institute) with Baybars Orsek, Poynter Institute, July 2, 2020.

3. Interview by Daniel Arnaudo (National Democratic Institute) with Pablo Medina, ColombiaCheck Director, February 18, 2020.

4. See Zeiter, Kirsten, Sandra Pepera, and Molly Middlehurst. “Tweets That Chill: Analyzing Online Violence Against Women in Politics.” Washington, D.C.: National Democratic Institute, May 2019. https://www.ndi.org/tweets-that-chill; and “Violence Against Women in Elections Online: A Social Media Analysis Tool.” CEPPS. Accessed March 22, 2021. https://www.ifes.org/publications/violence-against-women-elections-online-social-media-analysis-tool.