Platforms

Written by Vera Zakem, Senior Technology and Policy Advisor at the Institute for Security and Technology and CEO of Zakem Global Strategies, Kip Wainscott, Senior Advisor at the National Democratic Institute, and Daniel Arnaudo, Advisor for Information Strategies at the National Democratic Institute

 

Digital platforms have become prominent resources for sharing political information, organizing communities, and communicating on matters of public concern. These platforms have undertaken a mix of responses and approaches to counter the growing prevalence of disinformation and misinformation affecting the information ecosystem. With a broad spectrum of communities struggling to mitigate the harmful effects of disinformation, hate speech, coordinated influence operations, and related forms of harmful content, the private sector’s access to privileged and proprietary data and metadata often uniquely positions these companies to understand the challenges.

A number of prominent social media companies and messaging platforms are leveraging their abundant data to help inform responses to disinformation and misinformation campaigns. These responses vary widely in character and efficacy, but can generally be characterized as falling into one of the following three categories:

  1. policies, product interventions, and enforcement measures to limit the spread of disinformation;

  2. policies and product features to provide users with greater access to authoritative information, data, or context; and

  3. efforts to promote stronger community responses and societal resilience to disinformation and misinformation, including through digital literacy and internet access.

Many platforms have implemented new policies, or changed the enforcement of existing policies, in response to disinformation related to the COVID-19 pandemic, the 2020 U.S. Presidential Election, and the January 6th assault on the U.S. Capitol. With the increase of false information related to COVID-19, fact-checking increased 900 percent from January to March 2020, according to an Oxford University study. The World Health Organization has characterized this spread of misinformation about COVID-19 as an “infodemic,” which coincided with a period of increased social media use as many people were confined to their homes during the pandemic. 

In addition, the 2020 U.S. presidential election inspired major platforms to update their policies related to coordinated inauthentic behavior, manipulated media, and disinformation campaigns targeting voters and candidates. Similarly, the January 6th attack on the U.S. Capitol has motivated social media platforms to once again reexamine and update their policies, and their enforcement, as it relates to disinformation and the potential risks of offline violence. 

This chapter examines platform responses in greater detail in order to provide a foundational understanding of the steps social media platforms and encrypted messaging services take to address disinformation. Across all of these approaches, it is important to note that social media policies and enforcement actions are constantly evolving as the threat landscape changes. To better document these changes, most prominent platforms, including Twitter, YouTube, and Facebook, regularly publish transparency reports that provide data to users, policymakers, peers, and civil society stakeholders about how these platforms update their policies, enforcement strategies, and product features to respond to the dynamic threat landscape and societal challenges. To help account for these ever-moving dynamics, and as highlighted throughout this chapter, platforms have in many cases partnered with local groups, civil society organizations, media, academics, and other researchers to design responses to these challenges in the online space. Given the evolving nature of the threat landscape, the policy, product, and enforcement actions documented here reflect the information available as of the publication of this guide.

 

In recognition of disinformation’s prevalence and potential to cause harm, many social media platforms have taken actions to limit, remove, or combat both disinformation and misinformation in various ways. These responses are typically informed by platforms’ policies regarding content and behavior and mostly operationalized by product features and technical or human intervention. This section examines the variety of approaches companies have taken to address digital disinformation on their platforms.

Sometimes the moderation of content on social media comes under the pretext of combating disinformation when it is actually in service of illiberal government objectives. It is important to note that platforms controlled by companies in authoritarian countries often remove disinformation and other harmful content in ways that raise serious censorship concerns, particularly when the harm is defined as criticism directed toward the government under which the company operates. 

 

(A). Platform Policies on Disinformation and Misinformation

A handful of the world’s largest and most popular social media companies have developed policies and community standards to address disinformation and misinformation. This section examines some of the most significant private sector policy responses to disinformation, including from Facebook, Twitter, and YouTube, as well as growing companies such as TikTok and others.

1. Facebook Policies

At Facebook, user activities are governed by a set of policies known as Community Standards. This set of rules does not presently ban disinformation or misinformation in general terms; however, it does feature several prohibitions that may apply to countering disinformation and misinformation in specific contexts. For example, the Community Standards prohibit content that misrepresents information about voting or elections, incites violence, promotes hate speech, or includes misinformation related to the COVID-19 pandemic. The Community Standards also prohibit “Coordinated Inauthentic Behavior,” which is defined to generally prohibit activities that are characteristic of large-scale information operations on the platform. Once detected, networks participating in coordinated inauthentic behavior are removed. In addition, Facebook has begun developing policies, engaging with experts, and building technology to increase the safety of women on its platform and its family of apps. Rules against harassment, unwanted messaging, and non-consensual intimate imagery that disproportionately targets women are all part of Facebook’s efforts to make women feel safer. However, more work still needs to be done, as the burden often falls on women to report abuse and manage their own safety on the platform.

Outside of these specific contexts, Facebook’s Community Standards include an acknowledgment that while disinformation is not inherently prohibited, the company has a responsibility to reduce the spread of “false news.” In operationalizing this responsibility, Facebook commits to algorithmically reduce (or down-rank) the distribution of such content, in addition to other steps to mitigate its impact and disincentivize its spread. The company has also developed a policy of removing particular categories of manipulated media that may mislead users; however, the policy is limited in scope. It extends only to media that is the product of artificial intelligence or machine learning and includes an allowance for any media deemed to be satire or content that edits, omits, or changes the order of words that were actually said.

It is worth recognizing that while Facebook’s policies generally apply to all users, the company notes that “[i]n some cases, we allow content which would otherwise go against our Community Standards – if it is newsworthy and in the public interest.” The company has further indicated that speech by politicians will generally be treated as within the scope of this newsworthiness exception, and therefore not subject to removal. Such posts are, however, subject to labeling that indicates they violate the Community Standards. In recent years, Facebook has taken steps to remove political speech and deplatform politicians, including former President Donald Trump in the wake of the January 6th attack on the U.S. Capitol. Following the attacks, Facebook made the decision to remove President Trump’s account from the platform indefinitely. The Oversight Board upheld the decision but criticized the open-ended nature of the suspension, so Facebook limited the suspension to two years. In 2018, Facebook also de-platformed Min Aung Hlaing and senior Myanmar military leaders for conducting disinformation campaigns and inciting ethnic violence. 

2. Twitter Policies

The Twitter Rules govern permissible content on Twitter, and while there is no general policy on misinformation, the Rules do include several provisions to address false or misleading content and behavior in specific contexts. Twitter’s policies prohibit disinformation and other content that may suppress participation or mislead people about when, where, or how to participate in a civic process; content that includes hate speech or incites violence or harassment; or content that goes directly against guidance from authoritative sources of global and local public health information. Twitter also prohibits inauthentic behavior and spam, which is an element of information operations that makes use of disinformation and other forms of manipulative content. Related to disinformation, Twitter has updated its hateful conduct policy to prohibit language that dehumanizes people on the basis of race, ethnicity, and national origin.

Following public consultation, Twitter has also adopted a policy regarding sharing synthetic or manipulated media that may mislead users. The policy requires an evaluation of three elements, including whether (1) the media itself is manipulated (or synthetic); (2) the media is being shared in a deceptive or misleading manner; and (3) the content risks causing serious harm (including harm to users’ physical safety, the risk of mass violence or civil unrest, and any threats to the privacy or ability of a person or group to freely express themselves or participate in civic events). If all three elements of the policy are met, including a determination that the content is likely to cause serious harm, Twitter will remove the content. If only some of the elements are met, Twitter may label the manipulated content, warn users who try to share it, or attach links to trusted fact-checking content to provide additional context.
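For illustration, the three-part test can be read as a simple decision rule. The short Python sketch below is a paraphrase of the policy text above for explanatory purposes only; the field names and outcome labels are assumptions and do not represent any actual Twitter system.

```python
from dataclasses import dataclass

@dataclass
class MediaAssessment:
    """Illustrative paraphrase of the three elements in the policy described above."""
    is_manipulated: bool        # element 1: media is synthetic or manipulated
    shared_deceptively: bool    # element 2: shared in a deceptive or misleading manner
    risks_serious_harm: bool    # element 3: likely to cause serious harm

def moderation_action(a: MediaAssessment) -> str:
    """Map the policy text to an outcome: remove, label/warn, or no action."""
    if a.is_manipulated and a.shared_deceptively and a.risks_serious_harm:
        return "remove"                      # all three elements met
    if a.is_manipulated or a.shared_deceptively:
        return "label / warn / add context"  # only some elements met
    return "no action"

# Example: a doctored clip shared deceptively but unlikely to cause serious harm
print(moderation_action(MediaAssessment(True, True, False)))  # -> label / warn / add context
```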

In the context of electoral and political disinformation, Twitter's policies on elections explicitly prohibit misleading information about the voting process. Its rules note: "You may not use Twitter’s services for the purpose of manipulating or interfering in elections or other civic processes. This includes posting or sharing content that may suppress participation or mislead people about when, where, or how to participate in a civic process." However, inaccurate statements about an elected or appointed official, candidate, or political party are excluded from this policy. Under these rules, Twitter has removed postings that feature disinformation about election processes, such as promoting the wrong voting day or false information about polling places––addressing content that election management bodies (EMBs), election observers, and others are increasingly working to monitor and report. It is notable that Tweets from elected officials and politicians may be subject to Twitter’s public interest exception, described below. 

Under its public interest exception, Twitter notes that it “may choose to leave up a Tweet from an elected or government official that would otherwise be taken down” and cites the public’s interest in knowing about officials’ actions and statements. When this exception applies, rather than remove the offending content, Twitter will “place it behind a notice providing context about the rule violation that allows people to click through to see the Tweet.” The company has identified criteria for determining when a Tweet that violates its policies may be subject to this public interest exception, which include: 

  1. The Tweet violates one or more of the platform’s rules; 
  2. The Tweet was posted by a verified account; 
  3. The account has more than 100,000 followers; and
  4. The account represents a current or potential member of a governmental or legislative body:
    1. Current holders of an elected or appointed leadership position in a governmental or legislative body, OR
    2. Candidates or nominees for political office, OR
    3. Registered political parties

In considering how to apply this exception, the company has also developed and published a set of protocols for weighing the potential risk and severity of harm against the public-interest value of the Tweet. During the 2020 U.S. Presidential Election cycle, Twitter applied the public interest interstitial notice to many of former President Donald Trump’s Tweets. Tweets that fall under this exception display the warning shown below:

Image
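For readers who find it useful, the eligibility criteria listed above can be expressed as a simple conjunction of conditions. The Python sketch below is purely illustrative; the attribute names are assumptions drawn from the policy text, and eligibility is only a first step, since Twitter still weighs harm against public-interest value before applying the notice.

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Illustrative account attributes drawn from the criteria listed above."""
    is_verified: bool
    follower_count: int
    is_current_official: bool      # elected or appointed office-holder
    is_candidate_or_nominee: bool
    is_registered_party: bool

def public_interest_exception_eligible(tweet_violates_rules: bool, acct: Account) -> bool:
    """Conjunction of the published criteria; paraphrased, not an actual Twitter system."""
    represents_government = (
        acct.is_current_official
        or acct.is_candidate_or_nominee
        or acct.is_registered_party
    )
    return (
        tweet_violates_rules
        and acct.is_verified
        and acct.follower_count > 100_000
        and represents_government
    )

# Example: a verified head-of-state account with millions of followers
official = Account(True, 5_000_000, True, False, False)
print(public_interest_exception_eligible(True, official))  # -> True (harm-weighing still follows)
```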

Following the January 6th attack on the U.S. Capitol and Tweets that incited violence, Twitter de-platformed former President Trump. The company published a blog post explaining its rationale. 

Finally, it is also noteworthy that Twitter has demonstrated a willingness to develop policies in response to specific topics that present a risk of polluting the platform’s information environment. For example, the company has implemented a special set of policies for removing content related to the QAnon conspiracy theory and accounts that promulgate it. Since the start of the COVID-19 pandemic, Twitter has also developed policies to counter COVID-related disinformation and misinformation, including a COVID-19 misleading information policy covering content that affects health and public safety.

3. YouTube Policies

YouTube follows a three-strike policy that results in the suspension or termination of offending accounts related to disinformation. YouTube policies include several provisions relevant to disinformation in particular contexts, including content that aims to mislead voters about the time, place, means, or eligibility requirements for voting or participating in a census; that advances false claims related to the eligibility requirements for political candidates to run for office and elected government officials to serve in office; or promotes violence or hatred against or harasses individuals or groups based on intrinsic attributes. In addition, YouTube has also expanded its anti-harassment policy that prohibits video creators from using hate speech and insults on the basis of gender, sexual orientation, or race.

Like other platforms, the rules also include a specific policy against disinformation regarding public health or medical information in the context of the COVID-19 pandemic. As misleading YouTube videos about the coronavirus gained 62 million views in just the first few months of the pandemic, YouTube indicated it removed “thousands and thousands” of videos spreading misinformation in violation of the platform’s policies. The platform reiterated its commitment to stopping the spread of such harmful content. 

YouTube has also developed a policy regarding manipulated media, which prohibits content that has been technically manipulated or doctored in a way that misleads users (beyond clips taken out of context) and may pose a serious risk of egregious harm. To further mitigate risks of manipulation or disinformation campaigns, YouTube also has policies that prohibit account impersonation, misrepresenting one’s country of origin, or concealing association with a government actor. These policies also prohibit artificially increasing engagement metrics, either through the use of automatic systems or by serving up videos to unsuspecting viewers.

4. TikTok Policies

In January 2020, TikTok implemented the ability for users to flag content as misinformation by selecting its new “misleading information” category. Owned by the Chinese company ByteDance, TikTok has faced persistent privacy concerns, as Chinese regulation requires companies to comply with government requests to hand over data. In April 2020, TikTok released a statement regarding the company’s handling of personal information, noting its “adherence to globally recognized security control standards like NIST CSF, ISO 27001 and SOC2," goals towards more transparency, and limitations on the "number of employees who have access to user data.” 

While such privacy concerns have loomed large in public debates involving the platform, disinformation is also a challenge that the company has been navigating. In response to these issues, TikTok has implemented policies to prohibit misinformation that could cause harm to users, “including content that misleads people about elections or other civic processes, content distributed by disinformation campaigns, and health misinformation.” These policies apply to all TikTok users (irrespective of whether they are public figures), and they are enforced through a combination of content removals, account bans, and making it more difficult to find harmful content, like misinformation and conspiracy theories, in the platform’s recommendations or search features. TikTok established a moderation policy “which prohibits synthetic or manipulated content that misleads users by distorting the truth of events in a way that could cause harm.” This includes banning deepfakes in order to prevent the spread of disinformation.

5. Snapchat Policies

In January 2017, Snapchat created policies to combat the spread of disinformation for the first time. Snapchat implemented policies for its news providers on the platform’s Discover page in order to combat disinformation, as well as to regulate information that is considered inappropriate for minors. These new guidelines require news outlets to fact-check their articles before they can be displayed on the platform’s Discover page to prevent the spread of misleading information. 

In an op-ed, Snapchat CEO Evan Spiegel described the platform as different from other types of social media, saying “content designed to be shared by friends is not necessarily content designed to deliver accurate information.” This inherent difference allows Snapchat to combat misinformation in a unique way. There is no feed of user-generated content on Snapchat like there is on many other social media platforms––a distinction that makes Snapchat more comparable to a messaging app. With Snapchat’s updates, the platform makes use of human editors who monitor and regulate what is promoted on the Discover page, preventing the spread of false information.

In June 2020, Snapchat released a statement expressing solidarity with the Black community amid the Black Lives Matter protests following the death of George Floyd. The platform said that it “may continue to allow divisive people to maintain an account on Snapchat, as long as the content that is published on Snapchat is consistent with our [Snapchat's] community guidelines, but we [Snapchat] will not promote that account or content in any way.” Snapchat also announced that it would no longer promote President Trump’s content on its Discover home page, citing concerns that his public comments could incite violence.

6. VKontakte

While many of the largest platforms have adopted policies aimed at addressing disinformation, there are notable exceptions to this trend. For example, VKontakte is one of the most popular social media platforms in Russia and has been cited for its use to spread disinformation, particularly in Russian elections. The platform has also been cited for its use by Kremlin-backed groups to spread disinformation beyond Russia’s borders, impacting other countries’ elections, as in Ukraine. While the platform is frequently used as a means to spread disinformation, it does not appear that VKontakte is enforcing any policies to stop the spread of fake news. 

7. Parler

Parler was created in 2018 and has often been dubbed the “alternative” to Twitter for conservative voices, largely due to its focus on freedom of speech and its only de minimis content moderation policies. This unrestricted, unmoderated speech has led to a rise in anti-Semitism, hate, disinformation and propaganda, and extremism. Parler has been linked to the coordinated planning of the January 6th insurrection at the U.S. Capitol. In the aftermath of this event, multiple services, including Amazon, Apple, and Google, removed Parler from their platforms due to its lack of content moderation and the serious risk to public safety. This move demonstrates the ways in which the wider marketplace can apply pressure on specific platforms to implement policies to combat disinformation and other harmful content. After discussions with Amazon and Apple, Parler made changes to the app to better detect and monitor hate speech. Google announced it would allow Parler to return if it made changes to the app consistent with Google's policies, but as of September 2021, Parler had still not been added back to the Google Play store.

(B). Technical and Product Interventions to Curtail Disinformation

Private sector platforms have developed a number of product features and technical interventions intended to help limit the spread of disinformation while balancing the interests of free expression. The design and implementation of these mechanisms are highly dependent on the nature and functionality of specific platforms. This section examines responses across these platforms, including traditional social media services, image- and video-sharing platforms, and messaging applications. Of note, one of the biggest issues these platforms have tried and continue to address across the board is virality: the speed at which information travels on these platforms. When virality is combined with algorithmic bias, it can lead to coordinated disinformation campaigns, civil unrest, and violent harm.

1. Traditional Social Media Services

Two of the world’s largest social media companies, Facebook and Twitter, have implemented interventions and features that work to suppress the virality of disinformation, alert users to its presence, or create friction that changes user behavior in order to slow the spread of false information within and across networks. 

At Twitter, product teams in 2020 began rolling out automated prompts that caution users against sharing links they have not themselves opened; this measure is intended to “promote informed discussion” and encourage users to evaluate information before sharing it. This follows the introduction of content labels and warnings, which the platform has affixed to Tweets that are not subject to removal under the platform’s policies (or under the company’s “public interest” exception, as described above) but which nonetheless may include misinformation or manipulated media. While these labels provide users with additional context (and are examined more thoroughly in the section of this chapter dedicated to such features), the labels themselves introduce a signal and potential friction that may impact a user’s decision to share or distribute misinformation. 

Facebook’s technical efforts to curtail disinformation include using algorithmic strategies to “down-rank” false or disputed information, decreasing the content’s visibility in the News Feed and reducing the extent to which the post may be encountered organically. The company also applies these distribution limits against entire pages and websites that repeatedly share false news. The company has also begun to employ notifications to users who have engaged with certain misinformation and disinformation in limited contexts, such as in connection with a particular election or regarding health misinformation related to the COVID-19 pandemic. Although this intervention is in limited use, the company says these notifications are part of an effort to “help friends and family avoid false information.”
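Down-ranking is easiest to understand as a change in ordering rather than a removal. The sketch below is a hypothetical illustration of that idea, assuming a simple engagement score and a fixed demotion factor; none of the names or values reflect Facebook’s actual ranking system.

```python
# Hypothetical illustration of down-ranking: the field names and demotion factor are
# invented for explanation, not drawn from Facebook's systems.
def ranked_feed(posts, demotion_factor=0.2):
    """Sort posts by score, demoting those rated false by fact-checkers."""
    def effective_score(post):
        score = post["engagement_score"]
        if post.get("rated_false_by_fact_checkers"):
            score *= demotion_factor  # reduce, rather than remove, distribution
        return score
    return sorted(posts, key=effective_score, reverse=True)

posts = [
    {"id": "a", "engagement_score": 0.9, "rated_false_by_fact_checkers": True},
    {"id": "b", "engagement_score": 0.5},
]
print([p["id"] for p in ranked_feed(posts)])  # -> ['b', 'a']
```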

Both Twitter and Facebook utilize automation for detecting certain types of misinformation and disinformation and enforcing content policies. These systems have played a more prevalent role during the pandemic as public health concerns have required human content moderators to disperse from offices. The companies similarly employ technical tools to assist in the detection of inauthentic activity on their platforms. While these efforts are not visible to users, the companies publicly disclose the fruits of these labors in periodic transparency reports which include data on account removals. These product features have been deployed globally, including in Sri Lanka, Myanmar, Nigeria, and other countries. 

Platforms that share videos and images have also integrated features into their products to limit the spread of false information. On Instagram, a Facebook company, the platform removes content identified as misinformation from its Explore page and from hashtags; the platform also makes accounts that repeatedly post misinformation harder to find by filtering that account's content from searchable pages. Examples of the ways Instagram has integrated curbing dis- and misinformation into its product development are shown below:

Image

TikTok uses technology to augment its content moderation practices, particularly to assist in identifying inauthentic behavior, patterns, and accounts dedicated to spreading misleading or spam content. The company notes that its tools enforce its rules and make it more difficult to find harmful content, like misinformation and conspiracy theories, in the platform’s recommendations or search features.

To support enforcement of its policies, YouTube similarly employs technology, particularly machine learning, to augment its efforts. As the company notes among its policies, “machine learning is well-suited to detect patterns, which helps us to find content similar to other content we’ve already removed, even before it’s viewed.”

2. Messaging Applications

Messaging platforms have proven to be significant vectors for the proliferation of disinformation. The risks are particularly pronounced among closed, encrypted platforms, where companies are unable to monitor or review content. 

Despite the complexity of the disinformation challenge on closed platforms, WhatsApp in particular has been developing technical approaches to mitigate the risks. Following a violent episode in India linked to viral messages on the platform being forwarded to large groups of up to 256 users at a time, WhatsApp introduced limits on message forwarding in 2018 –– which prevent users from forwarding a message to more than five people –– as well as visual indicators to ensure that users can distinguish between forwarded messages and original content. More recently, in the context of the COVID-19 pandemic, WhatsApp further limited forwarding by announcing that messages that have been forwarded more than five times can subsequently only be shared with one chat at a time. While the encrypted nature of the platform makes it difficult to assess the impact of these restrictions on disinformation specifically, the company reports that the limitations have reduced the spread of forwarded messages by 70%.
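The forwarding restrictions amount to a small set of rules applied at send time. The following is a minimal sketch of those rules using the limits described above; the constant names and the treatment of "highly forwarded" messages are illustrative, not WhatsApp's implementation.

```python
# Illustrative only: constants mirror the limits described above, not WhatsApp's code.
STANDARD_FORWARD_LIMIT = 5       # maximum chats a message can be forwarded to at once
HIGHLY_FORWARDED_THRESHOLD = 5   # forwarded more than this many times -> "highly forwarded"
HIGHLY_FORWARDED_LIMIT = 1       # highly forwarded messages go to one chat at a time

def allowed_forward_targets(times_previously_forwarded: int) -> int:
    """Return how many chats a message may be forwarded to under the rules above."""
    if times_previously_forwarded > HIGHLY_FORWARDED_THRESHOLD:
        return HIGHLY_FORWARDED_LIMIT
    return STANDARD_FORWARD_LIMIT

print(allowed_forward_targets(2))   # -> 5
print(allowed_forward_targets(12))  # -> 1
```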

In addition to restricting forwarding behavior on the platform, WhatsApp has also developed systems for identifying and taking down automated accounts that send high volumes of messages. The platform is experimenting with methods to detect patterns in messages through homomorphic encryption evaluation practices. These strategies may help to inform analysis and technical interventions related to disinformation campaigns in the future. 

WhatsApp, owned by Facebook, is especially seeking to combat misinformation about COVID-19 as such content continues to go viral, and the company's efforts have helped curb the spread of COVID-related dis- and misinformation. WhatsApp has created a WHO Health Alert chatbot to provide accurate information about COVID-19. Users can text a phone number to access the chatbot, which provides basic information initially and allows users to ask questions on topics including the latest numbers, protection, mythbusters, travel advice, and current news. This allows users to obtain accurate information and get direct answers to their questions. WhatsApp has provided information through this service to over one million users.
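The interaction follows a simple keyword-menu pattern. The sketch below is a hypothetical rendering of that flow; the topic names come from the description above, but the code does not reproduce the actual WHO Health Alert service, which runs on WhatsApp's Business API.

```python
# Hypothetical sketch of a keyword-menu health chatbot; topic names follow the
# description above, and the responses are placeholders.
TOPICS = {
    "1": ("Latest numbers", "Current case counts from official WHO reporting."),
    "2": ("Protection", "Guidance on hand-washing, masks, and distancing."),
    "3": ("Mythbusters", "Corrections to common COVID-19 myths."),
    "4": ("Travel advice", "Official travel guidance."),
    "5": ("News", "Latest WHO announcements."),
}

def reply(message: str) -> str:
    """Return the topic matching a numeric choice, or the menu for anything else."""
    choice = message.strip()
    if choice in TOPICS:
        title, body = TOPICS[choice]
        return f"{title}: {body}"
    menu = "\n".join(f"{key}. {title}" for key, (title, _) in TOPICS.items())
    return "Reply with a number to get information:\n" + menu

print(reply("hi"))   # -> menu of topics
print(reply("3"))    # -> "Mythbusters: Corrections to common COVID-19 myths."
```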

3. Search Engines

Google has implemented technical efforts to promote information integrity in search, changing its search algorithm to combat the dissemination of fake news and conspiracy theories. In a blog post, Google Vice President of Engineering Ben Gomes wrote that the company will “help surface more authoritative pages and demote low-quality content” in searches. To improve its search guidelines, Google is adding real people to act as evaluators who assess the quality of Google’s search results and give feedback on its experiments. Google will also provide “direct feedback tools” to allow users to flag unhelpful, sensitive, or inappropriate content that appears in their searches. 

 

As private-sector technology platforms confront the issue of disinformation and misinformation across their services, one common strategy has been to provide users with greater access to authoritative information and contextualizing data. These strategies have, to date, included labeling content that may be misleading or harmful to users, directing users to official information sources on important topics like voting or public health, and providing researchers and civil society observers with access to tools and data to better understand the information environment across various digital services.

1. Facebook

Facebook has implemented a number of initiatives to improve access to data and authoritative information both for users and researchers. In the context of elections, for example, the company has introduced information labels that are affixed to any user content referencing “ballots” or “voting” (irrespective of the content’s veracity). The labels direct users to official voting information and build on related efforts from different international contexts. For example, in Colombia during the election and peace process, Facebook created an Informed Voter button and Election Day reminders, which helped to spread credible information about the election process. In preparation for the 2019 local elections in Colombia, Facebook partnered with Colombia’s National Electoral Council (CNE) to provide credible voting information to citizens through an informed voter button and an Election Day reminder. The informed voter button, as in other contexts, redirected the user to the local election authority for information about where and when they could vote. Below is an example of the voter information available on Facebook.

Image

Facebook has also begun labeling certain state-controlled media to provide greater transparency about the sources of information on the platform. These labels currently appear in the platform’s ad libraries and on Pages and will eventually be expanded to be more widely visible. The labels build on transparency features already in operation on Facebook Pages, which include panels that provide context on how a page is being administered (including information about the users who manage the page and the countries from which they are operating), as well as information about whether the page is state-controlled. This labeling of state-sponsored media is a practice that Twitter replicated in August 2020, extending it to political and media figures as well.

In response to the pandemic, in April 2020, Facebook announced that it would tell users if they had been exposed to misinformation about COVID-19 and would direct users who had engaged with the misinformation to a World Health Organization website that debunks COVID-19 myths. Facebook is also removing COVID-19 misinformation from the platform, based on guidance from public health authorities. Facebook created a Coronavirus Information Center, which contains information about the virus from public health officials and local leaders. Through these efforts, Facebook and Instagram have directed over 2 billion people to reliable health information. The graphic below highlights Facebook’s efforts to educate consumers about COVID-19 disinformation.

Image

In support of research and analysis on the platform, Facebook enables greater access to data through its CrowdTangle application, which allows users to track pages and articles through a dashboard and downloadable datasets. CrowdTangle has become more openly available to academics and other researchers. CrowdTangle also has an open plug-in for Chrome that allows users to understand the reach of articles across Facebook, Instagram, Reddit, and Twitter.  

In addition, as advertising is a common vector for the spread of political and other forms of disinformation, Facebook has expanded access to advertising information through various ads databases and archives. This includes access to the Ad Library API for researchers and those with Facebook developer accounts to study data about how ads are used in real time and to prevent misuse of the platform through targeted ads. The API allows researchers to access Facebook's dataset of content more directly through an automated system and creates a comprehensive mechanism for data collection and analysis. 
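As a rough illustration of how a researcher might query the Ad Library API, the sketch below uses Python’s requests library. The endpoint, parameter, and field names follow Facebook’s public documentation at the time of writing but should be checked against current documentation; the access token and API version are placeholders supplied by the researcher’s own developer account.

```python
# A minimal sketch of querying the Ad Library API; parameter and field names reflect
# public documentation at the time of writing, and the token is a placeholder.
import requests

ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"   # issued to a verified Facebook developer account
ENDPOINT = "https://graph.facebook.com/v12.0/ads_archive"

params = {
    "search_terms": "election",
    "ad_reached_countries": "['CO']",          # e.g., ads delivered in Colombia
    "ad_type": "POLITICAL_AND_ISSUE_ADS",
    "fields": "page_name,ad_creative_bodies,ad_delivery_start_time,spend",
    "access_token": ACCESS_TOKEN,
}

response = requests.get(ENDPOINT, params=params, timeout=30)
response.raise_for_status()
for ad in response.json().get("data", []):
    print(ad.get("page_name"), ad.get("ad_delivery_start_time"))
```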

2. WhatsApp

As an encrypted messaging platform, WhatsApp has only limited information available to users and researchers about activities on its services. However, WhatsApp has supported access to its API in order to support certain research initiatives. The company has expanded API access through the Zendesk system––particularly for groups connected to the First Draft Coalition, such as Comprova in Brazil and CrossCheck in Nigeria.  This approach has been utilized to collect data on political events, the spread of false information and hate speech, and other research goals. The International Fact-Checking Network1 has also developed a collaboration with WhatsApp that includes access to the API for certain kinds of research, including an initiative launched in 2020 to combat misinformation associated with the COVID-19 pandemic.

3. Twitter

Twitter has developed a number of policies, campaigns, and product features with the goal of providing users with access to credible and authoritative information. In 2020, Twitter undertook substantial efforts to provide users with access to credible information about elections, including the U.S. Presidential Election, so that users could reliably access accurate information on voting and the integrity of the election results. These efforts also included additional product features and enhancements to prevent users from sharing misleading information about voting.  

Similarly, in connection with the COVID-19 pandemic, Twitter made robust investments in ensuring users find reliable and credible public health information. For example, a #KnowTheFacts search prompt has been translated into multiple languages and helps users find local, credible information and links to organizations that are working to curb COVID-19 threats. The company also updated its approach to address and contextualize misleading information about COVID-19 on its platform. For instance, Twitter announced in May 2020 that the company would label misleading tweets about COVID-19 “to provide additional explanations or clarifications in situations where the risks of harm associated with a Tweet are less severe but where people may still be confused or misled by the content.” Twitter has also provided access to its API for researchers and academics to further study the public conversation surrounding COVID-19 in real time. 
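For example, a researcher with API access could pull recent public Tweets about COVID-19 through the standard Twitter API v2 search endpoint. The sketch below is illustrative only; the bearer token is a placeholder, and the query operators shown are just one possible filter.

```python
# Minimal sketch of pulling recent public Tweets about COVID-19 via Twitter API v2.
import requests

BEARER_TOKEN = "YOUR_BEARER_TOKEN"   # placeholder; issued with approved API access
URL = "https://api.twitter.com/2/tweets/search/recent"

params = {
    "query": "covid-19 vaccine -is:retweet lang:en",  # exclude retweets, English only
    "max_results": 10,
    "tweet.fields": "created_at,public_metrics",
}
headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}

response = requests.get(URL, params=params, headers=headers, timeout=30)
response.raise_for_status()
for tweet in response.json().get("data", []):
    print(tweet["created_at"], tweet["text"][:80])
```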

More broadly, Twitter has partnered with UNESCO, the OAS, and organizations around the world on guides to improve media literacy. Following its decision to prohibit political advertising, Twitter announced it was deprecating its Ad Transparency Center.

Below are examples of Twitter’s approach to providing credible information to users:

Images

Colombia’s Election Management Body, the National Electoral Council (Consejo Nacional Electoral, or CNE), has worked with Facebook and Twitter to actively promote access to credible information about the election, monitor disinformation, and provide enhanced product features informing, reminding, and educating voters about voting. The CNE has signed MOUs with both companies and worked actively to train its officials on monitoring online platforms and reporting content. The CNE also worked in coordination with the companies during the 2019 local elections to train staff on Twitter tools such as Periscope, Moments, Twitter Mirror, Q&A, and TweetDeck, and on other best practices. The CNE also helped these platforms set up an automated account to respond to election queries, established a hashtag, and enabled communications such as live videos throughout the election process. For the CNE, partnering with Facebook and Twitter has been especially important, given that disinformation affects all people, including marginalized communities such as women, LGBTQ persons, and others.

4. Google

Google’s “knowledge panels” are boxes of information that appear when users search for people, places, things, and organizations that are in the “knowledge graph.” These panels are automatically generated and provide a snapshot of information on a particular topic. While knowledge panels were created to provide information and address misinformation and fake news, they have at times ended up magnifying disinformation. An example of a knowledge panel is depicted below:

Image

 

5. YouTube

In order to provide users with accurate information, YouTube provides “Breaking News” and “Top News” features, which elevate information from verified news sources. As part of the company’s ongoing efforts, YouTube has indicated that it is expanding the use of “information panels” to provide users with additional context from fact-checkers.

YouTube has also worked to label certain questionable content during the COVID-19 pandemic and has taken down content that was verifiably misleading, particularly the Plandemic video, which went viral in the spring of 2020 as COVID-19 spread rapidly.

6. TikTok

To promote authoritative  COVID-19 information in response to the spread of disinformation, TikTok has partnered with the World Health Organization (WHO) to create a landing page for the organization to provide accurate facts, safety information, Q&As, and informational videos about the pandemic. This partnership allows the WHO to provide information to a younger audience than many other social media platforms. TikTok’s Head of Product said this partnership has allowed the platform to act “globally and comprehensively” to provide accurate information to its users. TikTok also revised its guidelines to denounce misleading information and flag inaccurate content.

7. Snapchat

To boost authoritative COVID-19 information, Snapchat implemented filters within its platform that feature verified information on reducing the risk of contracting COVID-19. While the platform allows independent creators to make filters, it will not allow misinformation to be included in them. Snapchat also announced the launch of a health and wellness initiative in response to user concern about COVID-19. The “Here for You” tool in the search bar will allow users to access information about mental health as well as information directly from the WHO, the CDC, the Crisis Text Line, the Ad Council, and the National Health Service.

 

Collective action, community partnerships, and civil society engagement are important aspects of the private sector approach to addressing disinformation. These include individual companies’ investments, engagement, and partnerships, as well as collaborative initiatives involving multiple companies. This section examines partnerships and initiatives undertaken by particular companies, as well as cross-sectoral and multi-stakeholder collaborations to combat disinformation.

A. Company Partnerships and Initiatives

All major technology companies, such as Facebook, Google, and Twitter, have collaborated with civil society and others to combat disinformation, hate speech, and other harmful forms of content on their platforms. This section reviews some of the key initiatives they have undertaken to work with outside groups, particularly civil society organizations, on information space problems collectively.

1. Facebook 

Facebook has developed a number of public-facing partnerships and initiatives aimed at supporting civil society and other stakeholders working to promote information integrity. Among its most notable announcements, Facebook has inaugurated an independent Oversight Board. The Board is composed of technology, human rights, and policy experts who have been given the authority to review difficult cases involving speech that causes online harassment, promotes hate, or spreads disinformation and misinformation. As of the publication of this guidebook, the Oversight Board has reviewed and made determinations on content moderation cases, including cases in China, Brazil, Malaysia, and the United States. This is significant, as the Oversight Board takes into account human rights, legal considerations, and societal impact in reviewing difficult cases that the platform may not be in a position to address. 

The company has also invested in country-specific and regional initiatives. For example, WeThink Digital is a Facebook initiative to foster digital literacy through partnerships with civil society organizations, academia, and government agencies in various Asia-Pacific countries such as Indonesia, Myanmar, New Zealand, the Philippines, Sri Lanka, and Thailand. It includes public guides to user actions such as deactivating an account, digital learning modules, videos, and other pedagogical resources. In the context of elections in particular, Facebook has also developed partnerships with election monitoring bodies, law enforcement, and other government institutions dedicated to the investigation of campaigns during electoral processes, in certain cases creating a “war room” of dedicated staff, such as for the European Union, Ukraine, Ireland, Singapore, Brazil, and the 2020 U.S. election, which it has since closed. According to the NDI case study on the role of social media platforms in enforcing policy decisions during elections, both Facebook and Twitter worked with the National Electoral Council (CNE) in Colombia during the electoral process. 

In some countries, Facebook is partnering with third-party fact-checkers to review and rate the accuracy of articles and posts on the platform. As part of these efforts, in countries such as Colombia, Indonesia, and Ukraine, as well as various EU member states and the United States, Facebook has commissioned groups––through what is described as “a thorough and rigorous application process" established by the IFCN2––to become trusted fact-checkers who vet content, provide input into the algorithms that define the news feed, and downgrade and flag content that is identified as false. In Colombia, for example––where partners include AFP Colombia, ColombiaCheck, and La Silla Vacía––a representative from one of these partners reflected on the value of working with Facebook and platforms more broadly: "I think the most important thing is to talk more closely with other platforms because the way to widen our reach is to work with them. Facebook has its problems but it reaches a lot of people and especially reaches the people that have shared false information, and if we could do something like that with Twitter, Instagram, or WhatsApp it would be great; that is the ideal next step for me."3 Groups from more than 80 countries have partnered with Facebook in this way, underscoring the broad scope of this effort. 

 



In Focus: Facebook’s Social Science One Engagement

Facebook has supported the development of Social Science One, a consortium of universities and research groups that have been working to understand various aspects of the online world, including disinformation, hate speech, and computational propaganda. This is also supported by foundations including the John and Laura Arnold Foundation, the Democracy Fund, the William and Flora Hewlett Foundation, the John S. and James L. Knight Foundation, the Charles Koch Foundation, the Omidyar Network, the Sloan Foundation, and Children’s Investment Fund Foundation. The project was announced and launched in July 2018. Notably, all but three of the projects are focused on the developed world, and of those three, two projects are in Chile and one in Brazil. Through this consortium, the platform has enabled access to a URLs Data Set of widely shared links that is otherwise unavailable to the wider research community.


Facebook received criticism of the program because of the slow speed of implementation, the release and management of research data, and the negotiation of other complicated issues. In all aspects of collaboration with platforms, agreement on data sharing and management is a critical component of projects and must be negotiated carefully to avoid sharing private user information. The misuse of such data, as happened during the Cambridge Analytica scandal, should be avoided: in that case, data harvested from user profiles was used by private companies to model voter behavior and target advertisements with psychographic and other information, raising major questions about the use of private user data in campaigns and elections. It is important to highlight that the history of this project helped set the terms for research collaboration with Facebook going forward.

2. WhatsApp

While it is a closed platform, WhatsApp has supported researchers in developing studies of its platform as one of its principal means of community engagement. These studies cover a range of methodologies and show how enhanced access can lead to important results for understanding the closed platform, especially how it is used in contexts that are less visible or well understood. Many countries and regions are a black box, especially at the local level: groups are closed, the platform is encrypted, and it is difficult to see and understand anything in terms of content moderation. 

Abuse and online manipulation of WhatsApp through automated networks are common in many places. Local languages, dialects, and slang are not well known to moderators from different regions and countries. Violence against women online, in politics and elections, can have serious impacts on the political participation of the targeted individuals, as well as a chilling effect on the participation of women more broadly, and monitoring for hate speech should seek to understand methods of tracking local lexicons. The CEPPS partners have developed methodologies for tracking online hate speech against women and other marginalized groups, such as IFES's Violence Against Women in Elections (VAWIE) framework, NDI's Votes Without Violence Toolkit and Addressing Online Misogyny and Gendered Disinformation: A How-To Guide, as well as a social media analysis tool developed jointly through CEPPS that describes methodologies for building lexicons in local contexts4.

In many cases, there simply are not enough resources to hire even minimal numbers of moderators and technologists to deal with what is happening. This creates issues for content moderation, reporting, and the algorithmic detection and machine learning systems that inform these processes. In many cases, moderation efforts are up against information attacks and coordinated inauthentic behavior that go beyond ordinary manipulation and can be sponsored by private or public actors with deep pockets. In Brazil, WhatsApp's program supported studies of the country's elections by top researchers in the field. Researchers at Syracuse University and the Federal University of Minas Gerais studied user information sharing and compared it to voter behavior, while others from the Institute of Technology and Society in Rio looked at methods for training people in media literacy through the platform.

WhatsApp has supported research on the platform and enabled access to its business API in certain cases, such as the First Draft/Comprova project in Brazil. It has also financially supported groups such as the Center for Democracy and Development and the University of Birmingham to pioneer research on the platform in Nigeria.

3. Twitter

Twitter has taken a more comprehensive approach to releasing data than any other company. Since 2018, the company has made available comprehensive datasets of state-linked information operations that it has removed. Rather than providing samples or access to only a small number of researchers, Twitter established a public archive of all Tweets and related content that it has removed. The archive now runs into hundreds of millions of Tweets and several terabytes of media. 

This archive has enabled a wide range of independent research, as well as collaboration with expert organizations. In 2020, the company partnered with the Carnegie Partnership for Countering Influence Operations (PCIO) to co-host a series of virtual workshops to support an open exchange of ideas among the research community regarding how information operations can be better understood, analyzed, and mitigated. Twitter’s API is a unique source of data for the academic community, and the company launched a dedicated academic API product in 2021. 

More broadly, Twitter collaborates frequently with and has provided grants to support a number of organizations working to promote information integrity. Like Facebook, the company has worked closely with research partners such as the Stanford Internet Observatory, Graphika, and the Atlantic Council's Digital Forensic Research Lab on datasets related to the networks detected and removed from its platform. The platform has also collaborated with the Oxford Internet Institute’s Computational Propaganda Project to analyze information operation activities.

 

4. Microsoft

Microsoft has initiated the Defending Democracy Program, partnering with various civil society, private sector, and academic groups working on cybersecurity, disinformation, and civic technology issues. As part of this initiative, starting in 2018, Microsoft partnered with NewsGuard, a plug-in for browsers such as Chrome and Edge that rates news websites for users based on nine journalistic integrity criteria. Based on this evaluation, each site is given a positive or negative rating, green or red respectively. The plug-in has been downloaded thousands of times, and this technology powers information literacy programs in partnership with libraries and schools. 

It has also engaged in research initiatives and partnerships on disinformation, including support for research on disinformation and social media by Arizona State University, the Oxford Internet Institute, Princeton University's Center for Information Technology Policy, as well as Microsoft Research itself.

In a cross-sectoral collaboration, Microsoft, the Bill & Melinda Gates Foundation, and USAID supported the Technology and Social Change group at the University of Washington’s Information School to develop a program for Mobile Information Literacy that includes content verification, search, and evaluation. This project developed into a Mobile Information Literacy Curriculum which has since been applied in Kenya.

5. LINE

LINE, as with many other messaging apps, is sometimes taken advantage of by scammers, hoaxers, and fake news writers. While there have not been major claims of systematic disinformation on the platform, LINE has acknowledged issues of false information circulating on its networks. Fact-checkers have developed partnerships with the platform in order to prevent the spread of disinformation, including the CoFacts automated fact-checking system maintained by g0v (pronounced "gov zero"), a civic technology community in Taiwan. Users can add the fact-checking bot to their contacts, forward messages to it, and receive an answer in real time about whether the content is true or false. This also serves to automatically report suspicious messages to the platform, which allows LINE to track misinformation and disinformation without breaking end-to-end encryption.
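Conceptually, the bot performs a lookup of the forwarded message against a database of previously checked claims. The sketch below is a hypothetical illustration of that flow; the sample claims, matching logic, and threshold are invented for explanation and do not reproduce the actual CoFacts system.

```python
# Hypothetical sketch of a forward-and-check fact-checking bot; claims and matching
# logic are illustrative only.
from difflib import SequenceMatcher

CHECKED_CLAIMS = {
    "drinking hot water cures the virus": ("false", "No evidence supports this claim."),
    "polls close at 8 pm nationwide": ("true", "Confirmed by the election authority."),
}

def check_forwarded_message(text: str, threshold: float = 0.8):
    """Return the verdict for the closest previously checked claim, or flag for review."""
    text = text.lower().strip()
    claim, (verdict, note) = max(
        CHECKED_CLAIMS.items(),
        key=lambda item: SequenceMatcher(None, text, item[0]).ratio(),
    )
    if SequenceMatcher(None, text, claim).ratio() >= threshold:
        return verdict, note
    return "unknown", "No match found; message queued for human fact-checkers."

print(check_forwarded_message("Drinking hot water cures the virus!"))
# -> ('false', 'No evidence supports this claim.')
```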

In September 2019, LINE launched an anti-hoax campaign in partnership with the Associated Press. This campaign includes a series of educational videos focused on identifying credible news sources and fake news. In a press release LINE said, “Taking 'Stop Fake News' as the theme, the campaign aims to help users improve their media literacy and create a safe digital environment.”



In 2018, a group of international civil society organizations, including IFES, IRI, NDI, and International IDEA, formed the Design 4 Democracy Coalition to promote coordination among democracy organizations and provide a space for constructive engagement between the democracy community and technology companies.

B. Cross-Sector and Multistakeholder Initiatives

Increasingly, the major platforms are looking for broader ways to collaborate with civil society, governments, and others to not only combat disinformation, hate speech, and other harmful forms of content on their networks, but also promote better forms of content. These collaborations come in the form of coalitions with different groups, codes of practice, and other joint initiatives.

Facebook, Twitter, and other major platforms have, for example, increasingly engaged with research groups such as the Atlantic Council's Digital Forensic Research (DFR) Lab, Graphika, and others to identify and take down large networks of false or coordinated accounts that are in violation of community standards. In addition, local groups such as the International Society for Fair Elections and Democracy (ISFED) have also assisted social media platforms with information to facilitate takedowns and other enforcement actions. Local organizations are becoming an increasingly important component of the reporting system for various platforms that do not have the capacity to actively monitor and understand local contexts such as Georgia's.  

Among more formal collaborations, the Global Network Initiative (GNI) dates back to 2008 and continues to support multi-stakeholder engagement among platforms and civil society, particularly on issues related to disinformation and other harmful forms of content. For more information on the GNI, see the norms and standards chapter.

Among the cross-sector initiatives to combat disinformation, one of the most prominent is the European Union's Code of Practice on Disinformation. The code was developed by a European Union (EU) working group on disinformation. It offers member governments, as well as countries that want to trade and work with the bloc, guidelines on aligning their regulatory frameworks with the GDPR and other EU online regulations, along with plans for responses to disinformation through digital literacy, fact-checking, media, and support for civil society, among other interventions. Building on this code, the EU has developed a Democracy Action Plan, an initiative that the EU plans to implement in the coming year, focusing on promoting free and fair elections, strengthening media freedom, and countering disinformation. Core to its disinformation efforts are:

  • Improving the EU’s existing toolbox for countering foreign interference 
  • Overhauling the Code of Practice on Disinformation into a co-regulatory framework of obligations and accountability of online platforms
  • Setting up a robust framework for Code of Practice implementation. 

At the Internet Governance Forum held at UNESCO in Paris and the Paris Peace Forum in November 2018, the President of the French Republic, Emmanuel Macron, introduced The Paris Call for Trust and Security in Cyberspace. Signatories to the Call commit to promoting nine core principles and reaffirm various commitments related to international law, cybersecurity, infrastructure protection, and countering disinformation. So far, 79 countries, 35 public authorities, 391 organizations of civil society, and 705 companies and private sector entities have signed on to a common set of principles on stability and security in the information space. The United States has yet to formally commit or sign on to the initiative.  Nevertheless, the initiative represents one of the most ambitious cross-sector collaborations dedicated to cybersecurity and information integrity to date.