1. Related multistakeholder norms for cybersecurity, internet freedom, and governance issues

Updated On: Apr 05, 2021

Many normative frameworks have been developed to govern the online space, addressing issues related to traditional human rights concepts such as freedom of expression, privacy, and good governance. Some of these connect with efforts to build normative standards around disinformation and promote information integrity, but they address different aspects of Internet, technology, and network governance. The Global Network Initiative (GNI) is an early example: formed in 2008 after two years of development, it encourages technology companies to respect the freedom of expression and privacy rights of their users. Its components link with information integrity principles, first by ensuring that the public sphere remains open for free expression, and second by ensuring that user data is protected and not misused by malicious actors to target users with disinformation, computational propaganda, or other forms of harmful content.



The GNI Principles, centered on freedom of expression, privacy, governance, accountability, and transparency, provide a framework for companies to apply human rights principles to their practices, while the Implementation Guidelines provide a mechanism for applying them when responding to government censorship and surveillance demands.

The GNI also serves as a mechanism for collective action among civil society organizations and other stakeholders advocating for better-informed regulation of Information and Communication Technologies (ICTs), including social media, to promote principles of freedom of expression and privacy. This includes participation in advisory networks such as those of the Christchurch Call and the Freedom Online Coalition, as well as in multi-sectoral international bodies focused on issues related to online extremism and digital rights, such as those sponsored by the United Nations and the Council of Europe.


The Global Network Initiative is an international coalition that seeks to harness collaboration with technology companies in support of the GNI Principles (“the Principles”) and Implementation Guidelines, which provide an evolving framework for responsible company decision-making in support of freedom of expression and privacy rights. As company participation expands, the Principles are taking root as a global standard for human rights in the ICT sector. The GNI also advocates collectively with governments and international institutions for laws and policies that promote and protect freedom of expression and privacy, drawing on instruments such as the International Covenant on Civil and Political Rights and the United Nations Guiding Principles on Business and Human Rights. It has assessed companies including Facebook, Google, LinkedIn, and Microsoft.

GNI Principles: 

  • Freedom of Expression
  • Privacy
  • Responsible Company Decision Making
  • Multi-Stakeholder Collaboration
  • Governance, Accountability, and Transparency

In October 2008, representatives of technology companies, civil society, socially responsible investors, and academia launched the Global Network Initiative. After two years of discussions, they released a set of principles focused primarily on how companies that manage Internet technologies can ensure freedom of expression and privacy on their networks, along with guidelines for implementing those principles. Member technology companies with assets related to disinformation, social media, and the overall information space include Facebook, Google, and Microsoft. Representatives from civil society include the Center for Democracy and Technology, Internews, and Human Rights Watch, as well as organizations from the Global South such as Colombia's Karisma Foundation and the Center for Internet and Society in India.

Every two years, the GNI publishes an assessment of the companies engaged in the initiative, gauging their adherence to the Principles and their success in implementing them. The latest version was published in April 2020, covering 2018 and 2019. The principles on freedom of expression relate to disinformation issues, but they focus more on companies allowing free expression than on preventing the potential harms of malicious content such as disinformation and hate speech.

These standards and the GNI have encouraged greater interaction between tech companies and representatives from academia, media, and civil society, and greater consultation on issues related to information integrity, particularly censorship and content moderation. For instance, a proposed "Fake News" law in Brazil would require "traceability" of users, obliging them to register with government identity documents on Facebook and other social networks wishing to operate in the country, so that they could be identified for sanction if they spread disinformation. This would conflict with the GNI's privacy provisions, which hold that users should be allowed anonymous access to networks. The GNI released a statement calling out these issues and has advocated against the proposed law. This shows how the framework can be used for joint advocacy through a multi-stakeholder effort, although its efficacy is less clear. Nonetheless, the GNI has helped form a foundation for other efforts that have since developed, including the Santa Clara Principles on Transparency and Accountability in Content Moderation and the EU codes on disinformation and hate speech, which focus more specifically on social media issues.

Other groups have focused on developing standards linking human rights and other online norms with democratic principles. The Luminate Group's Digital Democracy Charter (DDC), for example, sets out a list of rights and responsibilities for the digital media environment and politics. The DDC "seeks to build stronger societies through a reform agenda -- remove, reduce, signal, audit, privacy, compete, secure, educate, and inform." In a similar vein, the National Democratic Institute, supported in part by the CEPPS partners, has developed the Democratic Principles for the Information Space, which aim in part to address digital rights issues and counter harmful speech online through democratic standards for platform policies, content moderation, and products.


The Manila Principles on Intermediary Liability

Define principles for intermediaries, and for the governments that regulate them, to follow in democratic and authoritarian environments alike, including that:

  • Intermediaries should be shielded from liability for third-party content
  • Content must not be required to be restricted without an order by a judicial authority
  • Requests for restrictions of content must be clear and unambiguous, and must follow due process
  • Laws and content restriction orders and practices must comply with the tests of necessity and proportionality
  • Laws and content restriction policies and practices must respect due process
  • Transparency and accountability must be built into laws and content restriction policies and practices

The Manila Principles on Intermediary Liability were developed in 2014 and formally launched in 2015 by a group of organizations and experts focused on technology policy and law from around the world. Drafters include the Electronic Frontier Foundation, the Center for Internet and Society from India, KICTANet (Kenya), Derechos Digitales (Chile), and Open Net (South Korea), representing a wide range of technology perspectives and regions. The principles relate to questions of intermediary liability for content that have arisen in the US and Europe around Section 230 of the Communications Decency Act of 1996 and Germany's Network Enforcement Act (NetzDG) of 2017.

 


The drafters agreed upon standards holding that intermediaries like Facebook, Google, and Twitter, which host or otherwise manage content, should abide by basic democratic norms, while governments should likewise respect certain norms in regulating and otherwise controlling content and networks. Their manifesto states:

"All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users’ rights, including freedom of expression, freedom of association, and the right to privacy. With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.

The principles hold, first, that intermediaries should enjoy legal protections shielding them from liability for third-party content hosted on their servers; this keeps conversation open and moderation systems manageable. Second, content should not be restricted without a judicial order, and such orders must be clear, unambiguous, and consistent with due process. Third, orders and related practices should comply with tests of necessity and proportionality; that is, they should be reasonably necessary and proportional to the gravity of the offense. Finally, transparency and accountability should be built into these laws and legal systems, so that all can see how they operate and how they are being applied.

These principles have provided a way for the signatories and other civil society organizations to evaluate how countries are managing online systems, and how platforms can manage their content and apply democratic norms to their own practices. Various organizations have signed on, ranging from media NGOs to human rights and policy groups to civic technologists. This technical and geographic diversity gives the principles backing from, and links to, content creators, policymakers, providers, and infrastructure managers all over the world. The principles provide one practical means for organizations to work together to monitor the policies and systems that shape the information space and, in certain cases, to lobby for changes to them.

"These principles were developed in the wake of a conference at Santa Clara University in 2018. At Santa Clara in 2018, we held the first-of-its-kind conference on content moderation at scale. Most [companies] had not disclosed at all what they were doing. Their policies were about content moderation and how they were applying them. So we co-organized the day-long conference and ahead of this conference a small subgroup of academics and activists organized by the Electronic Frontier Foundation met separately and had a whole conversation and it was out of that sort of side meeting that the Santa Clara principles arose." - Irina Racu, Director of the Internet Ethics Program at Santa Clara's Center for Applied Ethics1


The Santa Clara Principles on Transparency and Accountability in Content Moderation, developed by legal scholars and technologists based mostly in the United States, cover various aspects of content moderation and target social media companies with large user bases. The principles include that:

  • Companies should publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.
  • Companies should provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.
  • Companies should provide a meaningful opportunity for timely appeal of any content removal or account suspension.

The Santa Clara Principles on Transparency and Accountability in Content Moderation emerged as a means of assessing how companies develop the policies and systems that track and organize the content flowing across their networks. Generally, they focus on ensuring that companies publicize the number of posts removed and accounts banned, provide notice to users when they do so, and provide systems for appeal. The principles were developed in the wake of a conference at Santa Clara University in 2018. Irina Raicu, the Director of the Internet Ethics Program at Santa Clara University's Markkula Center for Applied Ethics, was one of the founders of the project and is a continuing member. She describes how it began:

"Once drafted, various companies signed on in support of them, including social media giants such as Facebook, Instagram, Reddit, and Twitter."

The principles are organized around three overarching themes: Numbers, Notice, and Appeal. Under Numbers, companies should track and inform the public of the number of posts removed and accounts suspended, blocked, or flagged, in a regular report that is machine-readable. Second, under Notice, users and others impacted by these policies should be notified of takedowns and other forms of content moderation in open and transparent ways. These rules should be published and publicly understood by all users, regardless of background. If governments are involved, say to request a takedown, users should be apprised as well, though generally those who report content and manage these systems should have their anonymity maintained. Third, under Appeal, there should be clearly defined processes for appealing these decisions. Appeals should be reviewed and managed by humans, not machines, suggesting mechanisms that groups like the Facebook Oversight Board are attempting to build. The principles hold, however, that these practices should be built into all content moderation, not only high-level systems.

These principles have been applied in various ways to draw attention to how companies develop content moderation systems. One notable application is the Electronic Frontier Foundation's annual "Who Has Your Back" reports, which rate companies on their adherence to the Santa Clara Principles as well as on other metrics, such as transparency and notice to users. In the 2019 report, EFF notes that 12 of the 16 companies rated endorsed the principles, suggesting that there is some buy-in for the concept. Companies like Reddit adhere to all of the principles, while others like Facebook and Twitter meet only two or three. With many social media companies still falling short, and with international and other new players entering the market, applying the principles globally will remain a challenge.

Footnotes

1 Interview by Daniel Arnaudo, National Democratic Institute, with Irina Raicu, September 11, 2020.