Developing Norms and Standards on Disinformation

Complete Document - Norms and Standards

Written by Daniel Arnaudo, Advisor for Information Strategies at the National Democratic Institute

 

Normative frameworks for the information space have developed over the course of many years, through collaborations between civil society groups, private sector companies, governments, and other stakeholders. However, norms and standards specific to disinformation and social media issues are still embryonic: existing initiatives are being revised to address new online threats (for instance in content moderation, corporate governance, the digital agenda, and cybersecurity), while new ones dedicated specifically to disinformation and related social media issues are just forming.

This section will examine how the different codes and principles in this space are evolving, how they can potentially link with existing international best practices, and ways that programs can be designed to connect with these nascent frameworks. Some codes operate at the organizational level, prescribing how parties and private or public sector entities should behave to discourage the use and promotion of disinformation, computational propaganda, and other harmful forms of content, while encouraging openness, freedom of expression, transparency, and other positive principles related to the integrity of the information space. Others operate at the individual level, as codes of practice for media monitors, fact-checkers, and researchers in the space. Both organizational and individual efforts will be considered in this section.

One way of understanding these normative frameworks for the information space is as a form of negotiation: technology companies reach agreement on shared norms and standards with other groups (such as governments, advertisers, media, and communications professionals), while non-governmental organizations, media, and civil society provide oversight and, to a certain extent, hold powers to enforce these rules. Different stakeholders enter into different forms of agreement with the information technology and communications sectors depending on the issue at stake, the principles involved, the means of oversight and safeguards, and ultimately the consequences of any abrogation of or divergence from the terms. These standards also focus on the different vectors of information disorder: content, sources, and users. Examples include content moderation standards such as the Santa Clara Principles; fact-checking principles focusing on both sources and content, such as those of the Poynter Institute's International Fact-Checking Network; and standards such as the EU Code of Practice on Disinformation that attempt to address all three vectors: content through encouraging better moderation, sources through efforts to identify them, and users through media and information literacy standards.

Other actors, such as parties, policymakers, and the public sector, can work to ensure that norms related to online operations are enforced, with varying degrees of success. Ultimately, these normative frameworks depend on agreements between parties to abide by them, but other forms of oversight and enforcement are available to society. The development of norms and standards should also integrate inclusive, gender-sensitive approaches, reflecting that work to advance gender equality and social inclusion broadly and work to counter disinformation can and should be mutually reinforcing. Many of the frameworks address corporate stakeholders and the technology sector in particular, such as the Santa Clara Principles on Content Moderation, Ranking Digital Rights, the Global Network Initiative, and the European Union's Codes of Practice on Disinformation and Hate Speech, while others engage with a broader range of groups, including civil society actors, government, media, and communications sectors. Other frameworks attempt to engage with parties themselves, to create codes of online conduct for candidates and campaigns, either through informal agreements or more explicit codes of conduct. Finally, normative frameworks can be used to ensure that actors working in fields related to disinformation issues, such as journalists and fact-checkers, promote information integrity.

This section will cover these categories of normative interventions, which address content, actors such as platforms, and the targets of disinformation, hate speech, computational propaganda, and other harmful forms of content.

These frameworks all have elements that impact the information space, particularly around freedom of expression, privacy, and the inherent conflicts in creating open spaces for online conversation while also ensuring inclusion and penalties for hateful or other problematic content. They are also evolving and being adapted to the new challenges of an increasingly online, networked society that is confronted by disinformation, hate speech, and other harmful content. This guide will now review more detailed information and analysis of these approaches and potential models, as well as partner organizations, funders, and organizational mechanisms.

Many normative frameworks have developed to govern the online space, addressing issues related to traditional human rights concepts such as freedom of expression, privacy, and good governance. Some of these connect with building normative standards around disinformation to help promote information integrity, but address different aspects of the Internet, technology, and network governance. The Global Network Initiative (GNI) is an older example, which formed in 2008 after two years of development, in an effort to encourage technology companies to respect the freedom of expression and privacy rights of users. Its components link with information integrity principles: first, by ensuring that the public sphere is open for freedom of expression; second, by ensuring that user data is protected and not misused by malicious actors to target users with disinformation, computational propaganda, or other forms of harmful content.

The GNI also serves as a mechanism for collective action among civil society organizations and other stakeholders in advocating for better-informed regulation of Information Communication Technologies (ICTs), including social media, to promote principles of freedom of expression and privacy. This includes participation in advisory networks such as the Christchurch Call network and the Freedom Online Coalition, as well as in multi-sectoral international bodies focused on issues related to online extremism and digital rights, such as those sponsored by the United Nations and the Council of Europe.

Region: Global

The Global Network Initiative is an international coalition that seeks to harness collaboration with technology companies in support of the GNI Principles ("the Principles") and Implementation Guidelines, which provide an evolving framework for responsible company decision-making in support of freedom of expression and privacy rights. As company participation expands, the Principles are taking root as the global standard for human rights in the ICT sector. The GNI also collectively advocates with governments and international institutions for laws and policies that promote and protect freedom of expression and privacy, drawing on instruments such as the International Covenant on Civil and Political Rights and, subsequently, the United Nations Guiding Principles on Business and Human Rights. It has assessed companies including Facebook, Google, LinkedIn, and Microsoft.

GNI Principles: 

  • Freedom of Expression
  • Privacy
  • Responsible Company Decision Making
  • Multi-Stakeholder Collaboration
  • Governance, Accountability, and Transparency

In October 2008, after two years of discussions, representatives of technology companies, civil society, socially responsible investors, and academia launched the Global Network Initiative, releasing a set of principles focused primarily on how companies that manage Internet technologies could ensure freedom of expression and privacy on their networks, along with guidelines for implementing those principles. Participating tech companies with assets related to disinformation, social media, and the overall information space include Facebook, Google, and Microsoft. Representatives from civil society include the Center for Democracy and Technology, Internews, and Human Rights Watch, as well as organizations from the Global South such as Colombia's Karisma Foundation and the Center for Internet and Society in India.

Every two years, the GNI publishes an assessment of the companies engaged in the initiative, gauging their adherence to the principles and their success in implementing aspects of them. The latest version was published in April 2020, covering 2018 and 2019. The principles on freedom of expression relate to disinformation issues, but focus more on companies allowing for free expression than on preventing the potential harms of malicious content such as disinformation and hate speech.

These standards and the GNI have encouraged greater interaction between tech companies and representatives from academia, media, and civil society, and greater consultation on issues related to information integrity, particularly censorship and content moderation. For instance, a proposed "Fake News" law in Brazil would require "traceability" of users, or registration with government documents, within Facebook and other social networks wishing to operate in the country, so that users spreading disinformation can be identified for sanction. This would conflict with the GNI's privacy provisions ensuring that users are allowed anonymous access to networks. The GNI released a statement calling out these issues and has advocated against the proposed law. This shows how the framework can be used for joint advocacy through a multi-stakeholder effort, although its efficacy is less clear. Nonetheless, the GNI has helped form a foundation for other efforts that have since developed, including the Santa Clara Principles on content moderation and the EU Codes on Disinformation and Hate Speech, which focus more specifically on social media issues.

Other groups have focused on developing standards linking human rights and other online norms with democratic principles. The Luminate Group's Digital Democracy Charter, for example, created a list of rights and responsibilities for the digital media environment and politics. The DDC "seeks to build stronger societies through a reform agenda -- remove, reduce, signal, audit, privacy, compete, secure, educate, and inform." In a similar vein, the National Democratic Institute, supported in part by the CEPPS partners, has developed the Democratic Principles for the Information Space, which aim partly to address digital rights issues and counter harmful speech online through democratic standards for platform policies, content moderation, and products.

Region: Global

The Manila Principles on Intermediary Liability

Define various principles for intermediary companies to follow when operating in democratic and authoritarian environments, including that:

  • Intermediaries should be shielded from liability for third-party content
  • Content must not be required to be restricted without an order by a judicial authority
  • Requests for restrictions of content must be clear, be unambiguous, and follow due process
  • Laws and content restriction orders and practices must comply with the tests of necessity and proportionality
  • Laws and content restriction policies and practices must respect due process
  • Transparency and accountability must be built into laws and content restriction policies and practices

The Manila Principles on Intermediary Liability were developed in 2014 by a group of organizations and experts focused on technology policy and law from around the world. Principal drafters include the Electronic Frontier Foundation, the Center for Internet and Society from India, KICTANET (Kenya), Derechos Digitales (Chile), and Open Net (South Korea), representing a wide range of technology perspectives and regions. The principles relate to questions of liability for content on networks that have arisen in the US and Europe around Section 230 of the Communications Decency Act of 1996 and Germany's Network Enforcement Act (NetzDG) of 2017.

 

Manila Principles on Intermediary Liability

1 Intermediaries should be shielded from liability for third-party content

2 Content must not be required to be restricted without an order by a judicial authority

3 Requests for restrictions of content must be clear, be unambiguous, and follow due process

4 Laws and content restriction orders and practices must comply with the tests of necessity and proportionality

5 Laws and content restriction policies and practices must respect due process

6 Transparency and accountability must be built into laws and content restriction policies and practices

They agreed upon basic standards holding that intermediaries like Facebook, Google, and Twitter, which host content or manage it in some way, should abide by basic democratic standards, while governments should also respect certain norms regarding regulation and other forms of control of content and networks. Their manifesto stated:

"All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association, and the right to privacy. With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks."

The principles hold, first, that intermediaries should have legal mechanisms shielding them from liability for the content they host on their servers; this serves to provide for open conversation and manageable systems of moderation. Secondly, in this vein, the principles assert that content should not be easily restricted without judicial orders, and that such orders must be clear and follow due process. Thirdly, these orders and related practices should comply with tests of necessity and proportionality; that is, they should be reasonably necessary and proportional to the gravity of the offense. Finally, transparency and accountability should be built into any of these legal systems, so that all can see how they operate and are being applied.

These systems and principles have provided a way for the signatories and other civil society organizations to evaluate how countries are managing online systems, and how platforms can manage their content and apply democratic norms to their own practices. Various organizations have signed on, ranging from media NGOs and human rights and policy groups to civic technologists. This technical and geographic diversity gives the principles backing and links to content creators, policymakers, providers, and infrastructure managers from all over the world. They provide one practical means for organizations to work together to monitor and manage the policies and systems related to the information space and, in certain cases, lobby for changes to them.

"At Santa Clara in 2018, we held the first-of-its-kind conference on content moderation at scale. Most [companies] had not disclosed at all what they were doing, what their policies were about content moderation and how they were applying them. So we co-organized the day-long conference, and ahead of this conference a small subgroup of academics and activists organized by the Electronic Frontier Foundation met separately and had a whole conversation, and it was out of that sort of side meeting that the Santa Clara Principles arose." - Irina Raicu, Director of the Internet Ethics Program at Santa Clara's Center for Applied Ethics1

Region: Global

The Santa Clara Principles On Transparency and Accountability in Content Moderation cover various aspects of content moderation, developed by legal scholars and technologists based mostly in the United States, targeting social media companies with large user bases. The principles include that:

  • Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines
  • Companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.
  • Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

The Santa Clara Principles On Transparency and Accountability in Content Moderation developed as a means of assessing how companies craft the policies and systems that track and organize the content flowing across their platforms. Generally, they focus on ensuring that companies publicize the number of posts removed and accounts banned, provide notice to users when that is done, and provide systems for appeal. Irina Raicu, the Director of the Internet Ethics Program at Santa Clara's Center for Applied Ethics, was one of the founders of the project and is a continuing member; she describes its origins in the quote above.

Once drafted, various companies signed on in support of the principles, including social media giants such as Facebook, Instagram, Reddit, and Twitter.

The principles are organized around three overarching themes: Numbers, Notice, and Appeal. Under Numbers, companies should keep track of and inform the public about the numbers of posts reported and accounts suspended, blocked, or flagged, in a regular, machine-readable report. Secondly, under Notice, users and others impacted by these policies should be notified of takedowns or other forms of content moderation in open and transparent ways; the rules should be published and understandable by all users, regardless of background. If governments are involved, say to request a takedown, users should be apprised as well, but generally those who report and manage these systems should have their anonymity maintained. Thirdly, under Appeal, there should be clearly defined processes in place for appealing these decisions. Appeals should be reviewed and managed by humans, not machines, suggesting mechanisms that groups like the Facebook Oversight Board will attempt to build. However, the principles hold that these practices should be built into all content moderation, not only high-level systems.
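The Numbers principle's call for regular, machine-readable reporting can be illustrated with a minimal sketch. The schema below is hypothetical (the principles do not prescribe field names or formats); it simply shows the kind of structured data that would let researchers aggregate moderation figures across platforms and reporting periods:

```python
import json

# Hypothetical transparency report illustrating the Santa Clara "Numbers"
# principle: counts of removed posts and suspended accounts, broken out by
# the policy under which action was taken. All field names and figures are
# illustrative, not drawn from any actual platform report.
report = {
    "platform": "example-platform",
    "period": "2019-Q1",
    "posts_removed": {
        "hate_speech": 1200,
        "spam": 45000,
        "disinformation": 800,
    },
    "accounts_suspended": {"permanent": 300, "temporary": 950},
}

# Serializing to JSON keeps the report machine-readable, so third parties
# can parse and compare figures without scraping prose reports.
serialized = json.dumps(report)

# A researcher could then aggregate totals across categories.
total_removed = sum(report["posts_removed"].values())
print(total_removed)  # prints 47000
```

The design point is less the specific fields than the commitment to a stable, parseable format published on a regular schedule, which is what makes independent auditing of moderation numbers practical.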

These principles have been applied in various ways to draw attention to how companies have developed content moderation systems. One notable application has been the Electronic Frontier Foundation's "Who Has Your Back" reports. These reports, released annually, rate companies on their adherence to the Santa Clara Principles as well as on other metrics, such as transparency and notice to users. EFF notes that 12 of the 16 companies rated in 2019 endorsed the principles, suggesting some buy-in for the concept. Companies like Reddit adhere to all of the principles, while others like Facebook and Twitter satisfy only two or three. With many social media companies still falling short, and international and other new players entering the market, applying the principles globally will remain a challenge.

As these preexisting examples demonstrate, the private sector is one of the central components of the information ecosystem and has some internal guidelines and norms regulating how it is run. However, there are also important normative frameworks that have both induced and encouraged compliance with global human rights and democratic frameworks, including codes focused specifically on disinformation, hate speech, and related issues.

The companies that run large platforms in the information ecosystem, such as Facebook, Google, and Twitter, have a special responsibility for the internet's management and curation. There are certain normative frameworks, particularly within the European Union, that governments and civil society have developed to monitor, engage with, and potentially sanction tech companies. Their efficacy depends on a number of factors, including enforcement and oversight mechanisms, as well as more general pressure from unfavorable media coverage and adherence to global human rights standards.

The European Union is an important actor, as it is a transnational body with the power to define the conditions for operating in its market. This creates a strong incentive for companies to engage in cooperative frameworks with other private and public sector actors, as well as civil society, in negotiating their rights to operate on the continent. There is also the implicit threat of regulation: the General Data Protection Regulation, for instance, provides strong data protection covering not only European citizens but also foreigners operating within the bloc or engaging with systems based in it. This implicit power to regulate ultimately places a significant amount of normative and regulatory pressure on companies to comply if they want to participate in the European common market.

This system creates powerful incentives and mechanisms for alignment with national law and transnational norms. These codes create some of the most powerful normative systems anywhere in the world for enforcement around disinformation content, actors, and subjects, but they have been challenged by difficulties in oversight and enforcement, and many of the principles would not be permissible in the U.S., particularly given potential First Amendment infringements. The harmonization of these approaches internationally represents a key challenge in coming years, as various countries impose their own rules on networks, platforms, and systems, influencing and contradicting each other.

 

Region: European Union

The European Union developed a Code of Practice on Disinformation based on the findings of its High-Level Expert Group on the issue. This included recommendations for companies operating in the EU, suggestions for member states developing media literacy programs in response to the issue, and support for developing technology to underpin the code.

The five central pillars of the code are:

  • enhance the transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
  • promote media and information literacy to counter disinformation and help users navigate the digital media environment;
  • develop tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
  • safeguard the diversity and sustainability of the European news media ecosystem, and
  • promote continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses.

The European Union's Code of Practice on Disinformation is one of the more multinational and well-resourced initiatives in practice currently, as it has the support of the entire bloc and of its member governments behind its framework. The Code was developed by a European Commission-mandated working group on disinformation and contains recommendations for companies and other organizations that want to operate in the European Union. In addition to the Code, the EU provides member governments and countries that want to trade and work with the bloc with guidelines on how to organize their companies online, as well as plan for responses to disinformation through digital literacy, fact-checking, media, and support for civil society, among other interventions.

The Code was formulated and informed chiefly by the European High-Level Expert Group on Fake News and Online Disinformation, which reported in March 2018. The group, composed of representatives from the academic, civil society, media, and technology sectors, produced a report with five central recommendations that later became the five pillars, listed above, under which the Code is organized.

These principles were integrated into the Code, published in October 2018, roughly six months after the publication of the expert group's report. The European Union invited technology companies to sign on to the Code and many engaged, alongside other civil society stakeholders and EU institutions that worked to implement elements of these principles. Signatories included Facebook, Google, Microsoft, Mozilla, Twitter, as well as the European Association of Communication Agencies, and diverse communications and ad agencies. These groups committed not only to the principles, but to a series of annual reports on their progress in applying them, whether as communications professionals, advertising companies, or technology companies.

As participants in the initiative, the companies agree to a set of voluntary standards aimed at combating the spread of damaging fakes and falsehoods online, and submit annual reports on their policies, products, and other initiatives to conform with its guidelines. The initiative has been a modest success in engaging platforms in dialogue with the EU around these issues and addressing them with member governments, other private sector actors, and citizens.

The annual reports of these companies and the overall assessment of the implementation of the Code of Practice on Disinformation review the progress the Code made in its first year of existence, from October 2018 to October 2019. The reports find that while the Code has generally made progress in imbuing certain aspects of its five central principles in the private sector signatories, it has been limited by its "self-regulatory nature, the lack of uniformity of implementation and the lack of clarity around its scope and some of the key concepts."

An assessment from September 2020 found that the code had made modest progress but had fallen short in several ways, and provided recommendations for improvement. It notes that "[t]he information and findings set out in this assessment will support the Commission’s reflections on pertinent policy initiatives, including the European Democracy Action, as well as the Digital Services Act, which will aim to fix overarching rules applicable to all information society services." This helps describe how the Code on Disinformation fits within a larger program of European initiatives, linking with similar codes on hate speech moderation, related efforts to ensure user privacy, copyright protection, and cybersecurity, and broader efforts to promote democratic principles in the online space.

Other organizations have made independent assessments that offer their own perspectives on the European Commission's project. The Commission also engaged a consulting firm, Valdani, Vicari, and Associates (VVA), to review the Code, and it found that:

  • "The Code of Practice should not be abandoned. It has established a common framework to tackle disinformation, its aims and activities are highly relevant and it has produced positive results. It constitutes a first and crucial step in the fight against disinformation and shows European leadership on an issue that is international in nature.
  • Some drawbacks related to its self-regulatory nature, the lack of uniformity of implementation and the lack of clarity around its scope and some of the key concepts.
  • The implementation of the Code should continue and its effectiveness could be strengthened by agreeing on terminology and definitions."

The Carnegie Endowment for International Peace completed an assessment in a similar period, after the completion of the Code's first year of implementation, published in March 2020. The author found that the EU had indeed made progress in areas such as media and information literacy, where several technology signatories, including Facebook, Google, and Twitter, have created programs for users on these concepts.

The EU Code of Practice on Disinformation’s normative framework follows similar, related examples that describe and develop a component of the European Union's position, namely the 2016 EU Code of Conduct on Countering Illegal Hate Speech. That code links with the earlier "Framework Decision 2008/913/JHA of 28 November 2008 on combating certain forms and expressions of racism and xenophobia by means of criminal law" and national laws transposing it, under which illegal hate speech means "all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, color, religion, descent or national or ethnic origin." However, organizations such as the Center for Democracy and Technology have criticized the EU's approach and its potential for misuse and abuse, particularly with regard to the code on hate speech.

Overall, both the European Commission and Carnegie reports found that much remains to be done and that the Code on Disinformation would benefit from better shared terminology and structure. To that end, the EU recently adopted its Democracy Action Plan. Countering disinformation is one of its core pillars, comprising efforts to improve the EU’s existing tools and impose costs on perpetrators, especially of election interference; to move from the Code of Practice to a co-regulatory framework of obligations and accountability for online platforms, consistent with the Digital Services Act; and to set up a framework for monitoring the Code's implementation.

As can be seen, while companies have signed on to the EU Codes on Disinformation and Hate Speech, and member governments have pledged to follow their principles, oversight and enforcement are separate, more difficult mechanisms to apply. Nonetheless, if taken up by other countries and regions, these codes or similar kinds of agreements could provide a framework for collaboration on issues related to disinformation, hate speech, online violent extremism, and a host of other harmful forms of content.

Region: Global

Ranking Digital Rights Normative Frameworks

Ranking Digital Rights (RDR) ranks the world’s most powerful digital platforms and telecommunications companies on relevant commitments and policies, based on international human rights standards.

The RDR principles focus on three central pillars: Governance, Freedom of Expression, and Privacy.

For many years, technologists, academics, and other civil society representatives have worked together to push the private sector to address digital rights issues. One example is Ranking Digital Rights, an initiative sponsored by the New America Foundation that focuses on creating a concrete framework for engaging companies on normative issues related to the information space. Starting in 2015, Ranking Digital Rights has published a "Corporate Accountability Index" that ranks technology, telecom, and internet companies on their commitments to human rights. The framework is rooted in international human rights instruments such as the Universal Declaration of Human Rights (UDHR) and the United Nations Guiding Principles on Business and Human Rights.

The indicators cover principles related to governance, freedom of expression, and privacy, and give companies a score based on their compliance with various aspects of the Index. Companies ranked by the Index include major players in social media, search, and other areas of the information space, including Facebook, Google, Microsoft, and Twitter. Their responsiveness to these principles indicates how initiatives inspired by or analogous to Ranking Digital Rights can address social media, hate speech, and disinformation issues, while linking to older corporate accountability initiatives that preceded it, such as the Global Network Initiative.

Rebecca MacKinnon, a former journalist and digital rights scholar, board member of the Committee to Protect Journalists, and a founding member of the Global Network Initiative, created the Ranking Digital Rights project (RDR) in 2013 partly based on her book, Consent of the Networked. Nathalie Marechal, a Senior Policy Analyst at the project, details how the book was "one of the first pieces of research that honed in on the role that the private sector plays and tech companies specifically play in human rights violations both when they act as agents of governments as a result of government demands for data or demands for censorship, and as a result of companies pursuing their own business interests. The book ended with a call to action to push companies for transparency and more accountability for their role in enabling or perpetrating human rights violations."

The RDR principles focus on three central pillars: Governance, Freedom of Expression, and Privacy. From these central principles, the project developed indicators that measure and evaluate a company's adherence to these core tenets. They were designed to apply not only to what RDR calls "mobile and internet ecosystem" companies, but also to telecommunications companies such as Verizon or T-Mobile. The project divides its surveys into these two categories and assigns companies scores out of 100 based on their compliance with the indicators under each principle. These scores are tabulated and combined into a final score explored in Indexes, which are backed by data and were published periodically from 2015 through 2019, with a new edition due in 2021.
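To make the tabulation concrete, here is a minimal, hypothetical sketch of how indicator scores might be averaged into pillar scores and then into a single company score out of 100. The indicator values, the unweighted-mean aggregation, and the function names are illustrative assumptions for explanation only, not RDR's published formula.

```python
# Hypothetical sketch of an RDR-style aggregation: indicator scores (0-100)
# are averaged within each pillar, and pillar scores are averaged into one
# overall company score. RDR's actual weighting may differ.
from statistics import mean

def pillar_score(indicator_scores):
    """Average one pillar's indicator scores (each 0-100)."""
    return mean(indicator_scores)

def overall_score(pillars):
    """Combine pillar scores (pillar name -> list of indicator scores)
    into a single rounded 0-100 company score."""
    return round(mean(pillar_score(s) for s in pillars.values()))

# Fictional company scored under the three RDR pillars
company = {
    "governance": [70, 60, 50, 40, 55, 65],         # G1-G6
    "freedom_of_expression": [80, 60, 40, 30, 50],  # subset of F indicators
    "privacy": [55, 45, 65, 35, 50],                # subset of P indicators
}

print(overall_score(company))  # prints 53
```

A real replication (for instance, of local or national companies under RDR's Creative Commons methodology) would substitute the published indicator definitions and weights for these placeholders.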

The indexes are somewhat dynamic in that they evolve with new technologies, developments in the field, and new scholarship, which has changed the categories that define the methodology, the indicators, and the companies reviewed. For instance, the "mobile and internet ecosystem" category was known simply as "internet" in 2015 and renamed "internet and mobile" in 2017. The RDR project publishes its methodology openly and allows others to adapt it under a Creative Commons license to produce their own ratings, for instance of local or national companies. As a result, the RDR system has been replicated in contexts such as India, the Middle East, and Africa.

This is part of a process the organization has developed to keep the principles relevant while remaining stable enough to provide data about how companies are improving or declining over time. The index has expanded to cover 24 companies, including telcos like AT&T and Telefónica as well as social media platforms and tech companies like Facebook, Google, Microsoft, and Twitter. Within this system, RDR also considers properties these companies control, such as WhatsApp (Facebook) or Skype (Microsoft). These companies generally score similarly on the indicators, earning overall scores of 62 (Microsoft), 61 (Google), 57 (Facebook), and 55 (Twitter). By contrast, Chinese and Russian companies score much lower, such as the Chinese tech giant Tencent (home to WeChat, QQ, and QZone) at 26, the search engine and tech services goliath Baidu at 23, or the Russian Yandex at 32. This contrast between companies in authoritarian and democratic spheres of influence can be useful to emphasize on human rights grounds, especially with regard to increasingly prevalent information integrity and disinformation issues.

RDR Governance Indicators

G1. Policy commitment 

G2. Governance and management oversight

G3. Internal implementation

G4. Impact assessment

G5. Stakeholder engagement 

G6. Remedy

Under governance, the principles examine how a tech corporation governs itself and its products. This connects with the way companies manage their platforms, what kind of oversight they have in place, and particularly how they assess the impact their platforms are having. As RDR notes in its 2019 Index Report: "Indicator G4 evaluates if companies conduct risk assessments to evaluate and address the potential adverse impact of their business operations on users’ human rights. We expect companies to carry out credible and comprehensive due diligence to assess and manage risks related to how their products or services may impact users’ freedom of expression and privacy." This is increasingly becoming a key component of companies' policies concerning disinformation, and of how they can govern themselves effectively with regard to human rights concerns, particularly around freedom of expression and privacy.


The Index also notes that no company, including platforms like Facebook, Google, and Twitter, makes assessments of the impact of artificial intelligence or of ways to "identify and manage the possible adverse effects of rules enforcement on users’ freedom of expression and privacy rights," nor do they conduct risk assessments of the human rights implications of the design and implementation of their terms of service or targeted advertising systems. These internal company policies have huge impacts on the information environment, and RDR provides one means of evaluating them.

RDR Freedom of Expression Indicators

F1. Access to terms of service 

F2. Changes to terms of service

F3. Process for terms of service enforcement

F4. Data about terms of service enforcement

F5. Process for responding to third-party requests for content or account restriction

F6. Data about government requests for content or account restriction

F7. Data about private requests for content or account restriction

F8. User notification about content and account restriction

F9. Network management (telecommunications companies)

F10. Network shutdown (telecommunications companies)

F11. Identity policy


The freedom of expression indicators relate more specifically to the governance of content on the online platforms being evaluated. The terms of service help define how companies determine users’ rights in access, complaints, suspension, and takedown processes.


RDR evaluates how companies make information about these terms, and changes to them, available to users, and secondarily whether they provide publicly available information about the process through which takedowns or restrictions on content are made, as well as overall data about the volume and kinds of takedowns. This also relates to how governments make takedown requests: Facebook, Google, and Twitter have all been making more data available about takedowns through transparency reports, except for government request-related data, which has become more limited. Facebook and Twitter have been releasing less data related to government requests, particularly requests concerning closed platforms like Facebook Messenger, WhatsApp, and Twitter's Periscope video platform.


RDR also looks at company policies around identity, specifically whether companies require users to provide a government-issued ID or some other form of identification that could be tied to their real-world identity. Such requirements could allow for better identification of sources of disinformation and hate speech, or other nefarious users, but they also create potential avenues for the targeting of vulnerable users by governments, trolls, and others. RDR notes that Google, Instagram, WhatsApp, and Twitter allow anonymous users across their platforms, but that Facebook requires identification, which can create particular problems for vulnerable users.

RDR Privacy Indicators

P1. Access to privacy policies

P2. Changes to privacy policies

P3. Collection of user information

P4. Sharing of user information

P5. The purpose for collecting and sharing user information

P6. Retention of user information

P7. Users’ control over their own user information

P8. Users’ access to their own user information

P9. Collection of user information from third parties (internet companies)

P10. Process for responding to third-party requests for user information

P11. Data about third-party requests for user information

P12. User notification about third-party requests for user information

P13. Security oversight

P14. Addressing security vulnerabilities

P15. Data breaches

P16. Encryption of user communication and private content (internet, software, and device companies)

P17. Account Security (internet, software, and device companies)

P18. Inform and educate users about potential risks

Finally, in terms of privacy, RDR covers policies on user data: how it is handled, how its security is ensured, how vulnerabilities are addressed, and how oversight and notification of breaches are managed. While these issues may seem tangential to disinformation campaigns, they can have major impacts. Data taken from these companies can be used in disinformation campaigns; users accessing content through weak security systems can be spied on by governments and other nefarious actors; and targets of disinformation campaigns or cyberattacks may not even know they are under attack without proper systems for monitoring that their access is secure or for notification in cases of breach. RDR also examines whether companies inform users about potential "cyber risks," which it defines as "[s]ituations in which a user’s security, privacy, or other related rights might be threatened by a malicious actor (including but not limited to criminals, insiders, or nation-states) who may gain unauthorized access to user data using hacking, phishing, or other deceptive techniques." This could include risks from targeted online disinformation or harassment campaigns, particularly for vulnerable or marginalized users.

As a component of its ongoing review of tech practices and policies, RDR is evolving to examine issues around the ethical use of private data and of algorithms to deliver content. The 2020 Index will include considerations of these issues based on this revision. The methodology has already been revised over several years to cover evolving information systems, such as mobile phones, social media, and other technologies.

As Marechal notes: "We kept the methodology steady between 2017-2018, and for 2019 there were a couple of tweaks and we added companies every year, but by and large we kept it comparable for those three research cycles, and there was measurable progress for most companies across the years. In mid-2018, we started a project to revise and expand the RDR methodology, and that was a project that I led, to account for human rights harms associated with two interrelated issues: business models based on targeted advertising, and the use of what our funder called AI and what we called algorithmic systems in consumer-facing products, focusing specifically on their use for content moderation and content governance." RDR has also translated the methodology into other languages, including Arabic, French, and Spanish, providing a further basis to internationalize and localize the framework for various contexts globally.

Region: Global

The Global Internet Forum to Counter Terrorism (GIFCT) fosters collaboration and information-sharing between the technology industry, government, civil society, and academia to counter terrorist and violent extremist activity online.

Terrorist organizations and individual actors have long carried out attacks against civilians and critical infrastructure to instill fear and chaos and to reduce both the geopolitical and internal cohesion of societies. Since the introduction of the internet and, especially, social media, terrorist organizations have used the web to radicalize individuals, gain supporters, acquire technical “know-how” about building bombs and improvised explosive devices, and spread disinformation and propaganda. Particularly noteworthy in recent years is terrorist organizations' use of social media platforms. The 2019 Christchurch, New Zealand shooting, in which the shooter's video was initially streamed on Facebook Live and then reshared on YouTube and Twitter, provides a prime example of terrorists’ use of technology and the internet to spread their narratives and disinformation.

In response to increased terrorist activity in the information environment, the Global Internet Forum to Counter Terrorism (GIFCT) was formally established in 2017 by four core companies (Twitter, Microsoft, Facebook, and YouTube), along with several smaller signatories that increased its reach across platforms. GIFCT was designed to foster collaboration and information-sharing between industry partners to thwart terrorist actors’ ability to use the information environment to manipulate, radicalize, and exploit targeted populations. The four founding companies took turns chairing the work of GIFCT. Following the Christchurch Call to strengthen the coordinated response to terrorism in cyberspace through a multistakeholder process, GIFCT has become an independent non-profit organization, currently managed by its inaugural Executive Director, Nicholas Rasmussen, former Director of the National Counterterrorism Center. The goals of GIFCT are:

  • Improve the capacity of a broad range of technology companies, independently and collectively, to prevent and respond to abuse of their digital platforms by terrorists and violent extremists.
  • Enable multi-stakeholder engagement around terrorist and violent extremist misuse of the internet and encourage stakeholders to meet key commitments consistent with the GIFCT mission.
  • Encourage those dedicated to online civil dialogue and empower efforts to direct positive alternatives to the messages of terrorists and violent extremists.
  • Advance broad understanding of terrorist and violent extremist operations and their evolution, including the intersection of online and offline activities.

A core aspect of GIFCT is knowledge sharing and cooperation, not only with the main tech platforms but with smaller ones as well. As such, GIFCT is working with Tech Against Terrorism, a private-public partnership launched by the UN Counter-Terrorism Executive Directorate (UN CTED). The goals of this effort are to provide resources and guidance to increase knowledge sharing within the tech industry; encourage peer learning and support amongst members; foster collaboration and information sharing between the tech sector, government, civil society, and academia; and promote greater understanding about ways that terrorists exploit the internet to achieve their objectives.

Paris Call for Trust and Security in Cyberspace

With the rise of both disinformation campaigns and cyberattacks, and a shared understanding of the need for increased collaboration and cooperation to foster technological innovation while preventing attacks in cyberspace, a group of 78 countries, 29 public authorities, 349 organizations, and 648 companies have come together around a set of nine principles to create an open, secure, safe, and peaceful cyberspace. The Paris Call reaffirms signatories' commitment to international humanitarian law and customary international law, extending to citizens online the same protections these laws provide offline. In creating the call, governments, civil society, and industry, including social media companies, commit to providing safety, stability, and security in cyberspace, as well as increased trust and transparency for citizens. The call has created a multistakeholder forum for organizations and countries to come together to increase information sharing and collaboration. Participants in the Paris Call have signed onto the following nine principles:

  1. Prevent and recover from malicious cyber activities that threaten or cause significant, indiscriminate, or systemic harm to individuals and critical infrastructure.
  2. Prevent activity that intentionally and substantially damages the general availability or integrity of the public core of the Internet.
  3. Strengthen our capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities.
  4. Prevent ICT-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or to the commercial sector.
  5. Develop ways to prevent the proliferation of malicious software and practices intended to cause harm.
  6. Strengthen the security of digital processes, products, and services, throughout their lifecycle and supply chain.
  7. Support efforts to strengthen advanced cyber hygiene for all actors.
  8. Take steps to prevent non-State actors, including the private sector, from hacking-back, for their own purposes or those of other non-State actors.
  9. Promote the widespread acceptance and implementation of international norms of responsible behavior as well as confidence-building measures in cyberspace.

These principles have been signed by states such as Colombia, South Korea, and the UK (though not, initially, the United States); CSOs including IRI, IFES, and NDI; private-sector signatories from telecommunications (BT), social media (Facebook), and information technology (Cisco, Microsoft); and a host of other companies. The Call provides a framework for normative standards related to cybersecurity and disinformation across sectors, particularly under the third principle, focused on building capacity to resist malign interference in elections.

Parties are a critical component of political systems, and their adherence to normative frameworks is a challenging but central factor in any political system’s resilience to disinformation and other negative forms of content. When candidates and parties adhere to normative standards, for instance by refraining from computational propaganda methods and the promotion of false narratives, it can have a positive effect on the integrity of information in political systems. When parties, particularly major players, refuse to endorse these standards or actively adopt and adapt misleading methods such as disinformation campaigns and computational propaganda, this can have an incredibly harmful effect on the content being promoted and on the potential for false narratives, conspiracies, and hateful, violence-inducing speech to permeate and dominate campaigns. It is worth examining examples of parties working together to create positive standards for the information environment, as well as interventions for encouraging this kind of environment.

In the first model, parties can develop their own codes, either individually or collectively. One of the better examples comes from the German political parties during the 2017 parliamentary campaign. With the exception of the right-wing Alternative for Germany (AfD), all of the parties agreed to refrain from using computational propaganda, spreading or endorsing false narratives, and other such tactics. Germany also has a regulatory framework in the social media space, linked with EU regulations such as the General Data Protection Regulation (GDPR), which provides data privacy protections for European citizens as well as those who simply access European networks.

In other cases, civil society can work to induce parties to develop and adhere to codes of practice on disinformation, hate speech, and other information integrity issues. In Brazil, various civil society groups came together during the 2018 election to develop a public code of norms for parties and candidates to follow. The #NãoValeTudo campaign encouraged politicians to adopt the motto that "not everything is acceptable" (não vale tudo), which included not promoting false content, not engaging false networks or automating accounts for deceptive purposes, and other norms to ensure that campaigns acted fairly and in line with principles encouraging an open and fair conversation about policy and society. The campaign was formed by a consortium including fact-checking groups like Aos Fatos, digital rights organizations such as InternetLab and the Institute for Technology and Equity, and the national association of communications professionals (Associação Brasileira das Agências de Comunicação – ABRACOM).

Country: Brazil

#NãoValeTudo (not everything is acceptable) is a code of ethics for politicians, civic groups, and parties to follow that was developed during the 2018 Brazilian election cycle. The code focuses on principles around the non-use of computational propaganda techniques such as bot or troll networks, the non-promotion of false claims, transparency around campaign use and non-abuse of private user data, and the promotion of a free and open information space. Politicians and parties could signal their support through social media posts tagging the phrase, which was supported by a wide coalition of CSOs.

The group declared that:

"recent examples concern us, as they indicate that activities such as the collection and misuse of personal data to target advertising, the use of robots and fake profiles to simulate political movements, and positions and methods of disseminating false information can have significant effects on the rights of access to information, freedom of expression and association, and the privacy of all of us. The protection of such rights seems to us to be a premise for technology to be a lever for political discussion and not a threat to the autonomy of citizens to debate about their future."

The group received some endorsements, most notably from presidential candidate Marina Silva, the former Minister of the Environment under former President Lula da Silva and a relatively high-profile candidate, who posted on social media about her adherence and encouraged others to join. While some local candidates also endorsed the code, it did not receive buy-in from others in the presidential race, including the eventual winner, Jair Bolsonaro. Nonetheless, the campaign created a platform for discussing disinformation and the acceptability of certain tactics in the online sphere through the #NãoValeTudo hashtag and other methods, while also raising general awareness of these threats and highlighting how reluctant many campaigns and politicians were to embrace such standards. This methodology could be replicated by other civil society groups to develop standards for parties, call out those who break the rules, and raise awareness among the general public.

In a third model, international coalitions have worked together to form normative frameworks. Ahead of the 2019 Argentine elections, the Woodrow Wilson International Center for Scholars, the Annenberg Foundation, and International IDEA, in cooperation with Argentina's Council on Foreign Relations (CARI: Consejo Argentino para las Relaciones Internacionales) and organized by the National Electoral Council (CNE: Cámara Nacional Electoral), developed an Ethical Digital Commitment "with the aim of avoiding the dissemination of fake news and other mechanisms of disinformation that may negatively affect the elections." Hosted by the CNE, the Commitment was signed by parties; representatives of Google, Facebook, Twitter, and WhatsApp; media organizations; and associations of internet and technology professionals. Parties and other organizations would help both implement and provide oversight for it. These efforts show practical, often multisectoral collaboration between the public, private, and political sectors, in addition to civil society, following similar efforts by election management bodies in Indonesia and South Africa, as explained in the EMB section.

Similar, earlier codes have focused on hateful or dangerous speech alongside other election-related commitments, such as agreeing to accept a result. One example comes from Nigeria, where ahead of the 2015 elections the presidential candidates pledged to avoid violent or inciting speech in the so-called "Abuja Accord," developed with support from the international community and former UN Secretary-General Kofi Annan. This represented a particular effort to protect the rights of marginalized groups to participate in the electoral process and "to refrain from campaigns that will involve religious incitement, ethnic or tribal profiling both by ourselves and by all agents acting in our names." In an effort more focused on information integrity itself, the Transnational Commission on Election Integrity, a "bi-partisan group of political, tech, business and media leaders," developed The Pledge for Election Integrity for candidates of any country to sign.

The pledge has gained over 170 signatories in Europe, Canada, and the United States, and has the potential to expand to other contexts. A commission named for the late Kofi Annan, former head of the UN, also endorsed the pledge, suggesting that it could be translated for other contexts: "We endorse the call by the Transnational Commission on Election Integrity for political candidates, parties, and groups to sign pledges to reject deceptive digital campaign practices. Such practices include the use of stolen data or materials, the use of manipulated imagery such as shallow fakes, deep fakes, and deep nudes, the production, use, or spread of falsified or fabricated materials, and collusion with foreign governments and their agents who seek to manipulate the election." Nonetheless, as with any of these pledges, challenges of enforcement and wide acceptance among political candidates remain, especially in polarized or deeply contested environments. Standards development in this area remains a challenge, but a potentially critical mechanism for building trust in candidates, parties, and democratic political systems overall.

Region: Global

The Poynter Institute’s International Fact-Checking Network has developed a Code of Principles for fact-checkers to follow globally that includes standards around the methodology of the practice. Groups are vetted to ensure that they follow the standards, and those found to be in compliance are admitted to the network. The network has become the basis for Facebook’s fact-checking initiative, among others that have proliferated globally in contexts ranging from the EU and US to countries across the global south.

Fact-checking and other forms of research are described more generally in the civil society section, but the concept derives from key normative frameworks in research and from ethical mechanisms for building trust in industries, communities, and society as a whole. The Poynter Institute's International Fact-Checking Network (IFCN) is a network of newspapers, television stations, media groups, and civil society organizations certified by the IFCN to review content in ways that conform with international best practices. This certification ensures that the process and standards for fact-checking follow honest, unbiased guidelines, and that the organizations and their staff understand and comply with these rules. IFCN standards link with earlier journalistic standards for sourcing, developing, and publishing stories, such as the Journalist's Creed, or the national standards of journalism associations.

Other standards have been developed specifically for journalists, such as The Trust Project, funded by Craig Newmark Philanthropies. The Trust Project designed a system of indicators about news organizations and journalists to ensure reliable information for the public and encourage trust in journalism. These were created to establish norms that media organizations and social media platforms can follow to maintain a standard for the information they release. The project has partnered with Google, Facebook, and Bing to "use the Trust Indicators in display and behind the scenes," according to its website, and has been endorsed by over 200 news organizations such as the BBC, El País, and the South China Morning Post. It has also been translated and replicated in contexts such as Brazil, and invites journalists and organizations from around the world to join.

In a similar effort, Reporters Without Borders, the Global Editors Network, the European Broadcasting Union, and Agence France-Presse have formed the Journalism Trust Initiative (JTI) to create comparable standards for journalism ethics and trustworthiness. The initiative "is a collaborative standard setting process according to the guidelines of CEN, the European Committee for Standardization," according to its website. Also funded by Newmark, through a multiyear, multistakeholder process to develop and validate standards beginning in 2018, the JTI seeks to build norms among journalists and promote compliance within the news-writing community, particularly to combat mistrust in journalism and disinformation.

Poynter International Fact-Checking Network Standards
  • A Commitment to Nonpartisanship and Fairness
  • A Commitment to Transparency of Sources
  • A Commitment to Transparency of Funding and Organization
  • A Commitment to Transparency of Methodology
  • A Commitment to Open and Honest Corrections

The IFCN standards begin with nonpartisanship and fairness, which is often difficult to guarantee in ethnically diverse, polarized, or politicized situations. Fact-checking groups must commit to following the same process for every fact check, without bias toward content in terms of source, subject, or author; this ensures the fact-checkers are fair and neutral. They must also be transparent about their sources and how they arrived at their conclusions, which should be replicable and documented in as much detail as possible. Groups must likewise be transparent about their funding sources and about how they are organized and carry out their work, and staff must understand and practice this transparency. The methodology they use must be presented and practiced openly so that anyone can understand or even replicate what the group publishes, creating an understanding of a fair, consistent system for reviewing and publishing judgments about content. Finally, when a group gets something wrong, it must agree to issue rapid and understandable corrections.

Groups take courses and pass tests demonstrating that their systems and staff understand the standards and implement them in practice. They also publish their standards, methodologies, and organizational and funding information publicly. The head of the IFCN, Baybars Orsek, described the process:

"Those organizations go through a thorough and rigorous application process involving external assessors and our Advisory Board and in positive cases end up being verified, and platforms...particularly social media companies like Facebook and others often use our certification as a necessary, but not a sufficient criteria to work with fact-checkers right now."

Verified organizations that pass these tests join the network, link with partner organizations, participate in trainings, collaborate on projects, and work as trusted fact-checkers with other clients, particularly social media companies such as Facebook, which has engaged fact-checkers from the network in contexts all over the world. This concept is covered further in the Platform-Specific Engagement for Information Integrity topical section, but generally, groups that work with Facebook have their fact checks integrated directly into the Facebook application, allowing the app itself to display the fact checks next to the content. While this methodology is still being developed, and the efficacy of fact checks remains difficult to confirm, it provides a far more visible, dynamic, and powerful way to apply them. Groups that want to join the network can apply, and membership can help ensure that, when a project begins, they have a complete understanding of the state of the field and of best practices in fact-checking work.

Certain organizations have tried to extend normative frameworks beyond journalists and fact-checkers to broader civil society, with varying degrees of success. The Certified Content Coalition aimed to standardize requirements for accurate content by bringing together various organizations in support of new norms and standards. These groups consisted of a research cohort of journalists, students, academics, policymakers, technologists, and non-specialists interested in the program's mission. The coalition's goal was to create a widespread, collaboratively agreed-upon understanding of information disseminated to the public, lending it greater credibility. It ultimately stalled, with its founder Scott Yates noting, "[a]dvertisers said they wanted to support it, but in the end it seems that the advertising people were more interested in the perception of doing something than in actually doing something. (In hindsight, not shocking.)" This outcome highlights the potential limits of such initiatives.

The broader Pro-Truth Pledge is an initiative of a nonpartisan educational nonprofit focused on science-based, factual decision-making. Politicians and citizens sign the pledge to commit to truthful political conduct and to promote facts and civic engagement. While it has a much wider potential reach, applying it and measuring its effect are far more challenging. However, as with other norms, it has the potential to raise public awareness of information integrity issues, foster conversation, and build trust in good information and critical thinking about the bad.