2. Developing codes on disinformation, hate speech, and computational propaganda issues for the private sector


As the preceding examples demonstrate, the private sector is a central component of the information ecosystem, and it maintains internal guidelines and norms regulating how that ecosystem is run. Beyond these internal rules, however, important external normative frameworks have both induced and encouraged compliance with global human rights and democratic standards, including codes focused specifically on disinformation, hate speech, and related issues.

The companies that run large platforms in the information ecosystem, such as Facebook, Google, and Twitter, bear a special responsibility for the internet's management and curation. Governments and civil society, particularly within the European Union, have developed normative frameworks to monitor, engage with, and potentially sanction tech companies. Their efficacy depends on a number of factors, including enforcement and oversight mechanisms, the reputational pressure of critical media coverage, and companies' general adherence to global human rights standards.

The European Union is an important actor because, as a transnational body, it has the power to define the conditions for operating in its market. This creates a strong incentive for companies to engage in cooperative frameworks with public and private sector actors, as well as civil society, in negotiations over their right to operate on the continent. There is also the implicit threat of regulation: the General Data Protection Regulation, for instance, provides strong data protection not only for European citizens but also for foreign nationals residing in the Union or using systems based within it. This implicit power to regulate puts significant normative and regulatory pressure on companies to comply if they want to participate in the European common market.

This system creates powerful incentives and mechanisms for alignment with national law and transnational norms. These codes constitute some of the most powerful normative systems anywhere in the world for enforcement around disinformation content, actors, and subjects, but they have been challenged by difficulties in oversight and enforcement, and many of their principles would not be permissible in the U.S., particularly where they risk infringing on First Amendment protections. Harmonizing these approaches internationally represents a key challenge in the coming years, as various countries impose their own rules on networks, platforms, and systems, influencing and contradicting one another.

 

European Union

The European Union developed a Code of Practice on Disinformation based on the findings of its High-Level Expert Group on the issue. The Code includes recommendations for companies operating in the EU, suggestions for developing media literacy programs for member states responding to these issues, and guidance on developing technology to support the code.


The European Union's Code of Practice on Disinformation is one of the more multinational and well-resourced initiatives currently in practice, as it has the support of the entire bloc and its member governments behind its framework. The Code was developed by a European Commission-mandated working group on disinformation and contains recommendations for companies and other organizations that want to operate in the European Union. In addition to the Code, the EU provides member governments and countries that want to trade and work with the bloc with guidelines on how to organize their companies online, as well as how to plan responses to disinformation through digital literacy, fact-checking, media, and support for civil society, among other interventions.

The Code was formulated and informed chiefly by the European High-Level Expert Group on Fake News and Online Disinformation, which reported in March 2018. The group, composed of representatives from academia, civil society, media, and the technology sector, produced a report with five central recommendations that later became the five pillars under which the Code is organized. They are:

  1. enhance the transparency of online news, involving an adequate and privacy-compliant sharing of data about the systems that enable their circulation online;
  2. promote media and information literacy to counter disinformation and help users navigate the digital media environment;
  3. develop tools for empowering users and journalists to tackle disinformation and foster a positive engagement with fast-evolving information technologies;
  4. safeguard the diversity and sustainability of the European news media ecosystem, and
  5. promote continued research on the impact of disinformation in Europe to evaluate the measures taken by different actors and constantly adjust the necessary responses.

These principles were integrated into the Code, published in October 2018, roughly six months after the publication of the expert group's report. The European Union invited technology companies to sign on to the Code, and many engaged, alongside other civil society stakeholders and EU institutions that worked to implement elements of these principles. Signatories included Facebook, Google, Microsoft, Mozilla, and Twitter, as well as the European Association of Communication Agencies and various communications and advertising agencies. These groups committed not only to the principles but to a series of annual reports on their progress in applying them, whether as communications professionals, advertising companies, or technology companies.

As participants in the initiative, the companies agree to a set of voluntary standards aimed at combating the spread of damaging fakes and falsehoods online, and they submit annual reports on their policies, products, and other initiatives undertaken to conform with its guidelines. The initiative has been a modest success in engaging platforms in dialogue with the EU around these issues and in addressing them with member governments, other private sector actors, and citizens.

The companies' annual reports, together with the overall assessment of the implementation of the Code of Practice on Disinformation, review the progress the Code made in its first year of existence, from October 2018 to October 2019. They find that while the Code has generally made progress in instilling aspects of its five central principles among the private sector signatories, it has been limited by its "self-regulatory nature, the lack of uniformity of implementation and the lack of clarity around its scope and some of the key concepts."

An assessment from September 2020 found that the Code had made modest progress but had fallen short in several ways, and it provided recommendations for improvement. It notes that "[t]he information and findings set out in this assessment will support the Commission’s reflections on pertinent policy initiatives, including the European Democracy Action, as well as the Digital Services Act, which will aim to fix overarching rules applicable to all information society services." This helps situate the Code on Disinformation within a larger program of European initiatives, linking it with similar codes on hate speech moderation; related efforts to ensure user privacy, copyright protection, and cybersecurity; and broader efforts to promote democratic principles in the online space.

Other organizations have offered their own assessments of the European Commission's project. The Commission also engaged a consulting firm, Valdani, Vicari, and Associates (VVA), to review the Code, and the firm found that:

  • "The Code of Practice should not be abandoned. It has established a common framework to tackle disinformation, its aims and activities are highly relevant and it has produced positive results. It constitutes a first and crucial step in the fight against disinformation and shows European leadership on an issue that is international in nature.
  • Some drawbacks related to its self-regulatory nature, the lack of uniformity of implementation and the lack of clarity around its scope and some of the key concepts.
  • The implementation of the Code should continue and its effectiveness could be strengthened by agreeing on terminology and definitions."

The Carnegie Endowment for International Peace published its own assessment, covering a similar period after the Code's first year of implementation, in March 2020. The author found that the EU had indeed made progress in areas such as media and information literacy, where several technology signatories, including Facebook, Google, and Twitter, have created programs for users on these concepts.

The EU Code of Practice on Disinformation’s normative framework follows similar, related examples that describe and develop a component of the European Union's position, notably the 2016 EU Code of Conduct on Countering Illegal Hate Speech. The 2016 Code of Conduct builds on the earlier "Framework Decision 2008/913/JHA of 28 November 2008 combating certain forms and expressions of racism and xenophobia by means of criminal law" and the national laws transposing it, under which illegal hate speech means all conduct publicly inciting to violence or hatred directed against a group of persons, or a member of such a group, defined by reference to race, color, religion, descent, or national or ethnic origin. Meanwhile, organizations such as the Center for Democracy and Technology have criticized the EU's approach and its potential for misuse and abuse, particularly with regard to the code on hate speech.

Overall, both the European Commission and Carnegie reports found that there is much still to be done and that the Code on Disinformation would benefit from better-shared terminology and structure. To that end, the EU recently adopted its Democracy Action Plan. Countering disinformation is one of its core pillars, encompassing efforts to improve the EU’s existing tools and impose costs on perpetrators, especially of election interference; to move from the Code of Practice to a co-regulatory framework of obligations and accountability for online platforms, consistent with the Digital Services Act; and to set up a framework for monitoring the Code's implementation.

As can be seen, while companies have signed onto the EU Codes on Disinformation and Hate Speech, and member governments have pledged to follow their principles, oversight and enforcement are separate, more difficult mechanisms to apply. Nonetheless, if taken up by other countries in other regions, these codes or similar agreements could provide a framework for collaboration on issues related to disinformation, hate speech, online violent extremism, and a host of other harmful forms of content.

Global

Ranking Digital Rights Normative Frameworks

Ranking Digital Rights (RDR) ranks the world’s most powerful digital platforms and telecommunications companies on relevant commitments and policies, based on international human rights standards.


For many years, technologists, academics, and other civil society representatives have worked together to push the private sector to address digital rights issues. One example is Ranking Digital Rights, an initiative sponsored by the New America Foundation that focuses on creating a concrete framework for engaging companies on normative issues related to the information space. Since 2015, Ranking Digital Rights has published a "Corporate Accountability Index" that ranks technology, telecom, and internet companies on their commitments to human rights. The framework is rooted in international human rights principles such as the Universal Declaration of Human Rights (UDHR) and the United Nations Guiding Principles on Business and Human Rights.

The indicators cover principles related to governance, freedom of expression, and privacy, and they give companies a score based on compliance with various aspects of the Index. Ranked companies include major players in social media, search, and other parts of the information space, including Facebook, Google, Microsoft, and Twitter. Their responsiveness to these principles indicates how initiatives inspired by or analogous to Ranking Digital Rights can address social media, hate speech, and disinformation issues, while linking to older corporate accountability initiatives that preceded it, such as the Global Network Initiative.

Rebecca MacKinnon, a former journalist and digital rights scholar, board member of the Committee to Protect Journalists, and a founding member of the Global Network Initiative, created the Ranking Digital Rights project (RDR) in 2013 partly based on her book, Consent of the Networked. Nathalie Marechal, a Senior Policy Analyst at the project, details how the book was "one of the first pieces of research that honed in on the role that the private sector plays and tech companies specifically play in human rights violations both when they act as agents of governments as a result of government demands for data or demands for censorship, and as a result of companies pursuing their own business interests. The book ended with a call to action to push companies for transparency and more accountability for their role in enabling or perpetrating human rights violations."

The RDR principles focus on three central pillars: Governance, Freedom of Expression, and Privacy. From these central principles, the project developed indicators that measure and evaluate a company's adherence to the core tenets. The indicators were designed to apply not only to what RDR calls "mobile and internet ecosystem" companies but also to telecommunications companies such as Verizon or T-Mobile. The project divides its surveys into these two categories and assigns companies scores out of 100 based on their compliance and adherence to the indicators under each principle. These scores are tabulated and combined into a final score, explored in the Indexes, which are backed by data and were published periodically from 2015 through 2019, with a new edition due in 2021.

The indexes are somewhat dynamic in that they evolve based on new technologies, developments in the field, and new scholarship, which have changed the categories that define the methodology, the indicators, and the companies reviewed. For instance, the "mobile and internet ecosystem" category was known simply as "internet" in 2015 and renamed "internet and mobile" in 2017. The RDR project publishes its methodology openly and allows others to adapt it under a Creative Commons license to produce their own ratings, for instance of local or national companies. As a result, the RDR system has been replicated in contexts such as India, the Middle East, and Africa.

This is part of a process the organization has developed to keep the principles relevant while also keeping them stable enough to provide data about how companies are improving or declining over time. The Index has expanded to cover 24 companies, including telecommunications companies like AT&T and Telefónica as well as social media platforms and tech companies like Facebook, Google, Microsoft, and Twitter, along with properties those companies control, such as WhatsApp (Facebook) or Skype (Microsoft). These companies generally score similarly on the indicators, earning overall scores of 62 (Microsoft), 61 (Google), 57 (Facebook), and 55 (Twitter). By contrast, Chinese and Russian companies score much lower: the Chinese tech giant Tencent (home to WeChat, QQ, and QZone) scores 26, the search engine and tech services goliath Baidu 23, and the Russian Yandex 32. This contrast between companies in authoritarian and democratic spheres of influence, drawn on human rights grounds, can be useful to emphasize, especially with regard to increasingly prevalent information integrity and disinformation issues.
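To make the aggregation concrete, the sketch below shows how an index of this kind might roll indicator scores up into a single 0-100 company score. It is a minimal illustration, not RDR's published methodology: the equal weighting of indicators within each pillar (and of pillars within the total), the indicator values, and the function names are all illustrative assumptions.

```python
# Illustrative sketch of an RDR-style scoring rollup.
# Assumptions: equal weighting at every level; hypothetical indicator values.

from statistics import mean

# Each pillar maps indicator IDs to a 0-100 compliance score that a researcher
# would assign after reviewing a company's public policies and disclosures.
company_scores = {
    "governance": {"G1": 80, "G2": 60, "G3": 55, "G4": 40, "G5": 70, "G6": 50},
    "freedom_of_expression": {"F1": 75, "F2": 65, "F3": 50, "F4": 45},
    "privacy": {"P1": 70, "P2": 60, "P3": 35, "P4": 30},
}

def pillar_score(indicators: dict) -> float:
    """Average the indicator scores within one pillar."""
    return mean(indicators.values())

def total_score(pillars: dict) -> float:
    """Average the pillar scores into a single 0-100 index score."""
    return mean(pillar_score(indicators) for indicators in pillars.values())

for name, indicators in company_scores.items():
    print(f"{name}: {pillar_score(indicators):.1f}")
print(f"total: {total_score(company_scores):.1f}")
```

Averaging within pillars first means each pillar counts equally regardless of how many indicators it contains; a real index could instead weight indicators or pillars differently.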

RDR Governance Indicators

G1. Policy commitment 

G2. Governance and management oversight

G3. Internal implementation

G4. Impact assessment

G5. Stakeholder engagement 

G6. Remedy

Under governance, the principles look at how a tech corporation governs itself and its products. This connects with the way companies manage their platforms, what kind of oversight they have in place, and particularly how they assess the impact those platforms are having. As the 2019 Index Report notes: "Indicator G4 evaluates if companies conduct risk assessments to evaluate and address the potential adverse impact of their business operations on users’ human rights. We expect companies to carry out credible and comprehensive due diligence to assess and manage risks related to how their products or services may impact users’ freedom of expression and privacy." Such assessments are increasingly becoming a key component of companies' policies concerning disinformation, and of how they govern themselves with regard to human rights concerns, particularly around freedom of expression and privacy.

The Index also notes that no company, including platforms like Facebook, Google, and Twitter, assesses the impact of artificial intelligence, has ways to "identify and manage the possible adverse effects of rules enforcement on users’ freedom of expression and privacy rights," or conducts risk assessments of the human rights implications of the design and implementation of its terms of service or targeted advertising systems. These internal company policies have enormous impacts on the information environment, and RDR provides one means of evaluating them.

RDR Freedom of Expression Indicators

F1. Access to terms of service 

F2. Changes to terms of service

F3. Process for terms of service enforcement

F4. Data about terms of service enforcement

F5. Process for responding to third-party requests for content or account restriction

F6. Data about government requests for content or account restriction

F7. Data about private requests for content or account restriction

F8. User notification about content and account restriction

F9. Network management (telecommunications companies)

F10. Network shutdown (telecommunications companies)

F11. Identity policy


The freedom of expression indicators relate more specifically to the governance of content on the online platforms being evaluated. The terms of service help define how companies determine users’ rights in access, complaints, suspension, and takedown processes.

RDR evaluates how companies make information about these terms, and changes to them, available to users, and secondarily whether they publish information about the process through which takedowns or restrictions on content are made, as well as overall data about the kinds of takedowns that occur. This also relates to the ways governments make takedown requests: RDR notes that Facebook, Google, and Twitter have all been making more data about takedowns available through transparency reports, except for data related to government requests, which has become more limited. Facebook and Twitter have been releasing less data on government requests for data, particularly for closed platforms like Facebook Messenger, WhatsApp, and Twitter's Periscope video service.

RDR also looks at company policies around identity, namely whether companies require users to provide government-issued ID or some other form of identification that could be tied to their real-world identity. Such requirements could allow better identification of sources of disinformation, hate speech, and other nefarious activity, but they also create potential avenues for governments, trolls, and others to target vulnerable users. RDR notes that Google, Instagram, WhatsApp, and Twitter allow anonymous users across their platforms, but that Facebook requires identification, a policy that can create problems, particularly for vulnerable users.

RDR Privacy Indicators

P1. Access to privacy policies

P2. Changes to privacy policies

P3. Collection of user information

P4. Sharing of user information

P5. Purpose for collecting and sharing user information

P6. Retention of user information

P7. Users’ control over their own user information

P8. Users’ access to their own user information

P9. Collection of user information from third parties (internet companies)

P10. Process for responding to third-party requests for user information

P11. Data about third-party requests for user information

P12. User notification about third-party requests for user information

P13. Security oversight

P14. Addressing security vulnerabilities

P15. Data breaches

P16. Encryption of user communication and private content (internet, software, and device companies)

P17. Account security (internet, software, and device companies)

P18. Inform and educate users about potential risks

Finally, in terms of privacy, RDR covers policies related to user data: how it is handled, how its security is ensured, how vulnerabilities are addressed, and how oversight and notification of breaches are managed. While these issues may seem tangential to disinformation campaigns, they can have major impacts: data taken from these companies can be used in disinformation campaigns; users accessing content through weak security systems can be spied on by governments and other nefarious actors; and targets of disinformation campaigns or cyberattacks may not even know they are under attack without proper systems for monitoring the security of their access or notifying them of breaches. RDR also examines whether companies inform users about potential "cyber risks," which it defines as "[s]ituations in which a user’s security, privacy, or other related rights might be threatened by a malicious actor (including but not limited to criminals, insiders, or nation-states) who may gain unauthorized access to user data using hacking, phishing, or other deceptive techniques." This could include risks from targeted online disinformation or harassment campaigns, particularly for vulnerable or marginalized users.

As a component of its ongoing review of tech practices and policies, RDR is evolving to examine issues around the ethical use of private data and of algorithms to deliver content. The 2020 Index will include considerations of these issues based on this revision. The methodology has already been revised over a period of several years to cover evolving information systems, such as mobile phones, social media, and other technologies.

As Marechal notes: "We kept the methodology steady between 2017 and 2018, and for 2019 there were a couple of tweaks, and we added companies every year, but by and large we kept it comparable for those three research cycles, and there was measurable progress for most companies across the years. In mid-2018, we started a project to revise and expand the RDR methodology, a project that I led, to account for human rights harms associated with two interrelated issues: business models based on targeted advertising, and the use of what our funders called AI and we called algorithmic systems in consumer-facing products, focusing specifically on their use for content moderation and content governance." The project has also translated the methodology into other languages, including Arabic, French, and Spanish, providing a further basis to internationalize and localize the framework for various contexts globally.

Global

The Global Internet Forum to Counter Terrorism (GIFCT) fosters collaboration and information-sharing between the technology industry, government, civil society, and academia to counter terrorist and violent extremist activity online.

Terrorist organizations and individual actors have long carried out attacks against civilians and critical infrastructure to instill fear and chaos and to erode both the geopolitical and the internal cohesion of societies. Since the introduction of the internet and, especially, social media, terrorist organizations have used the web to radicalize individuals, gain supporters, share technical "know-how" about building bombs and improvised explosive devices, and spread disinformation and propaganda. Particularly noteworthy in recent years is terrorist organizations' use of social media platforms. The 2019 Christchurch, New Zealand shooting, in which the shooter's video was initially livestreamed on Facebook and then reshared on YouTube, Twitter, and other platforms, provides a prime example of terrorists' use of technology and the internet to spread their narratives and disinformation.

In response to increased terrorist activity in the information environment, the Global Internet Forum to Counter Terrorism (GIFCT) was formally established in 2017 by four core companies, Twitter, Microsoft, Facebook, and YouTube, along with several smaller signatories that extended its reach across platforms. GIFCT was designed to foster collaboration and information-sharing between industry partners to thwart terrorist actors' ability to use the information environment to manipulate, radicalize, and exploit targeted populations. The four companies that made up the forum initially took turns chairing its work. Following the Christchurch Call to strengthen the coordinated response to terrorism in cyberspace through a multistakeholder process, GIFCT became an independent non-profit organization, currently managed by its inaugural Executive Director, Nicholas Rasmussen, former Director of the National Counterterrorism Center. The goals of GIFCT are:

  • Improve the capacity of a broad range of technology companies, independently and collectively, to prevent and respond to abuse of their digital platforms by terrorists and violent extremists.
  • Enable multi-stakeholder engagement around terrorist and violent extremist misuse of the internet and encourage stakeholders to meet key commitments consistent with the GIFCT mission.
  • Encourage those dedicated to online civil dialogue and empower efforts to direct positive alternatives to the messages of terrorists and violent extremists.
  • Advance broad understanding of terrorist and violent extremist operations and their evolution, including the intersection of online and offline activities.

A core aspect of GIFCT is knowledge sharing and cooperation, not only among the main tech platforms but with smaller ones as well. To that end, GIFCT works with Tech Against Terrorism, a public-private partnership launched by the UN Counter-Terrorism Committee Executive Directorate (UN CTED). The goals of this effort are to provide resources and guidance that increase knowledge sharing within the tech industry; encourage peer learning and support among members; foster collaboration and information sharing between the tech sector, government, civil society, and academia; and promote greater understanding of the ways terrorists exploit the internet to achieve their objectives.
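One of GIFCT's best-known information-sharing mechanisms is its shared hash database, through which member platforms exchange "digital fingerprints" of known terrorist content so that each can detect re-uploads without sharing the underlying files. The sketch below is a minimal illustration of that matching pattern, not GIFCT's actual system: it uses exact SHA-256 hashes for simplicity, whereas the real database relies on perceptual hashes that also match visually similar, lightly edited copies; the database contents and function names here are hypothetical.

```python
# Minimal sketch of matching uploads against a shared industry hash list.
# Assumption: exact SHA-256 hashing; a production system would use perceptual
# hashes so that lightly modified re-uploads still match.

import hashlib

# Hypothetical hash list distributed to member platforms. The entry below is
# the SHA-256 digest of the bytes b"test", standing in for a flagged file.
SHARED_HASH_DB = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_hex(content: bytes) -> str:
    """Compute the hex SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(content).hexdigest()

def should_flag(content: bytes) -> bool:
    """Return True if the upload matches a known hash in the shared list."""
    return sha256_hex(content) in SHARED_HASH_DB

if __name__ == "__main__":
    print(should_flag(b"test"))       # True: byte-identical to a flagged file
    print(should_flag(b"new video"))  # False: no match in the shared list
```

The design lets platforms of very different sizes share detection signals without exchanging the prohibited content itself, which is what makes the mechanism workable for smaller members.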

Paris Call for Trust and Security in Cyberspace

With the rise of disinformation campaigns and cyberattacks, and a shared understanding of the need for increased collaboration and cooperation to foster technological innovation while preventing attacks in cyberspace, a group of 78 countries, 29 public authorities, 349 organizations, and 648 companies has come together around a set of nine principles to create an open, secure, safe, and peaceful cyberspace. The Paris Call reaffirms signatories' commitment to international humanitarian law and customary international law, which provide the same protections for citizens online as they do offline. In creating the Call, governments, civil society, and industry, including social media companies, commit to providing safety, stability, and security in cyberspace, as well as increased trust and transparency for citizens. The Call has created a multistakeholder forum process through which organizations and countries come together to increase information sharing and collaboration. Participants in the Paris Call have signed onto the following nine principles:

  1. Prevent and recover from malicious cyber activities that threaten or cause significant, indiscriminate, or systemic harm to individuals and critical infrastructure.
  2. Prevent activity that intentionally and substantially damages the general availability or integrity of the public core of the Internet.
  3. Strengthen our capacity to prevent malign interference by foreign actors aimed at undermining electoral processes through malicious cyber activities.
  4. Prevent ICT-enabled theft of intellectual property, including trade secrets or other confidential business information, with the intent of providing competitive advantages to companies or to the commercial sector.
  5. Develop ways to prevent the proliferation of malicious software and practices intended to cause harm.
  6. Strengthen the security of digital processes, products, and services, throughout their lifecycle and supply chain.
  7. Support efforts to strengthen advanced cyber hygiene for all actors.
  8. Take steps to prevent non-State actors, including the private sector, from hacking-back, for their own purposes or those of other non-State actors.
  9. Promote the widespread acceptance and implementation of international norms of responsible behavior as well as confidence-building measures in cyberspace.

These principles have been signed by states such as Colombia, South Korea, and the UK (though initially not the United States); by CSOs including IRI, IFES, and NDI; and by private sector actors in telecommunications (BT), social media (Facebook), and information technology (Cisco, Microsoft), as well as a host of other companies. The Call provides a framework for normative standards related to cybersecurity and disinformation across sectors, particularly under the third principle, which focuses on building capacity to resist malign influence in elections.

Footnotes

1. These include the E-Commerce Directive, the Audio-visual Media Services Directive, the Copyright Directive, the General Data Protection Regulation, the Directive on the Security of Network and Information Services, the Code of Conduct on Illegal Hate Speech Online, and other online EU regulations.

2. The EU Code of Conduct on Countering Illegal Hate Speech Online (2016). https://ec.europa.eu/info/policies/justice-and-fundamental-rights/combatting-discrimination/racism-and-xenophobia/eu-code-conduct-countering-illegal-hate-speech-online_en.

3. https://rankingdigitalrights.org/index2019/report/governance/

4. https://rankingdigitalrights.org/index2019/report/governance/