The debate over what content should be allowable on social media platforms is global in scope. Analysis on this topic is prolific, and global consensus is unlikely to emerge given legitimate and differing definitions of the bounds that can and should be placed on speech and expression. Many of the measures that introduce platform liability for content have hate speech as a central component. While hate speech is not limited to political or electoral periods, placing pressure on societal fault lines through the online amplification of hate speech is a common tactic of political propaganda and of disinformation actors during electoral periods.
Some national jurisdictions have attempted to introduce varying degrees of platform responsibility for all content hosted on their platforms, whether organic or paid.
The German Network Enforcement Act (NetzDG) requires social media companies to delete “manifestly unlawful” content within 24 hours of being notified. Other illegal content must be reviewed within seven days of being reported and deleted if found to be in violation of the law. Failure to comply carries up to a 5 million euro fine, though the law exempts providers with fewer than 2 million users registered in Germany. The law does not actually create new categories of illegal content; its purpose is to require social media platforms to enforce 22 statutes on online content that already exist in the German criminal code. It targets already-unlawful content such as “public incitement to crime,” “violation of intimate privacy by taking photographs,” defamation, “treasonous forgery,” forming criminal or terrorist organizations, and “dissemination of depictions of violence.” It also includes Germany’s well-known prohibition of the glorification of Nazism and of Holocaust denial. The takedown process does not require a court order or provide a clear appeals mechanism, relying on online platforms to make these determinations.25
The law has been criticized as too broad and vague in its differentiation of “unlawful content” from “manifestly unlawful content.” Some critics also object to NetzDG as a “privatized enforcement” law because online platforms, rather than courts or other democratically legitimate institutions, assess the legality of the content. The law is also credited with inspiring a number of copycat laws in countries where the potential for censoring legitimate expression is high. As of late 2019, Foreign Policy identified 13 countries that had introduced similar laws; the majority of these countries were ranked as “not free” or “partly free” in Freedom House’s 2019 Freedom on the Net assessment.26
France, which has pre-existing rules restricting hate speech, also introduced measures similar to those in Germany to govern content online. However, the French Constitutional Council struck down these measures in 2020; like the German law, they would have required platforms to review and remove hateful content flagged by users within 24 hours or face fines. The court ruled that the provisions would lead platforms to adopt an overly conservative approach to removing content in order to avoid fines, thus restricting legitimate expression.
The United Kingdom is another frequently cited example that illustrates various approaches to regulating harmful online content, including disinformation. A 2019 Online Harms White Paper outlining the UK government’s plan for online safety proposed placing a statutory duty of care on internet companies to protect their users, with oversight by an independent regulator. A public consultation period on the White Paper informed legislation proposed in 2020 that focuses on making companies responsible for the systems they have in place to protect users from harmful content. Rather than require companies to remove specific pieces of content, the new framework would require platforms to provide clear policies on the content and behavior that are acceptable on their sites and to enforce these standards consistently and transparently.
These approaches contrast with the Bulgarian framework, for example, which exempts social media platforms from editorial responsibility.27 Section 230 of the United States’ Communications Decency Act also expressly releases social media platforms from vicarious liability for content posted by their users.
Other laws that introduce some degree of platform liability or responsibility for moderating harmful content have been proposed or enacted in countries around the globe. Broadly speaking, this category of regulatory response is the subject of fierce debate over its potential for censorship and abuse. The models in Germany, France, and the United Kingdom are frequently cited as examples of attempts by consolidated democracies to more actively impose a duty on platforms for the content they host while incorporating sufficient checks to protect freedom of expression – though measures in all three countries have also been criticized for the ways they attempt to strike this balance. These different approaches also illustrate how a proliferation of national laws introducing platform liability is poised to place a multitude of potentially contradictory obligations on social media companies.
Some jurisdictions prohibit paid campaign advertising in traditional media outright, with that ban extending or potentially extending to paid advertising on social media.28 “For decades, paid political advertising on television has been completely banned during elections in many European democracies. These political advertising bans aim to prevent the distortion of the democratic process by financially powerful interests and to ensure a level playing field during elections.”29
The French Electoral Code stipulates that, for the six months prior to the month of an election, commercial advertising for the purposes of election propaganda via the press or “any means of audiovisual communication” is prohibited.30 A stipulation such as this is contingent on clear definitions of online campaigning and political advertising; amendments to the French Electoral Code in 2018, for example, attempt to inhibit a broad range of political and issue advertisements by stipulating that the law applies to “information content relating to a debate of general interest,”31 rather than limiting the provision to ads that directly reference candidates, parties, or elections. In the French case, these provisions, along with a number of transparency requirements discussed in the sections below, led some platforms, such as Twitter, to ban all political campaign ads and issue advocacy ads in France, a move that was later expanded into a global policy. Similarly, Microsoft banned all ads in France “containing content related to debate of general interest linked to an electoral campaign,” a policy that is also now global. Google banned all ads containing “informational content relating to a debate of general interest” between April and May 2019 across its platforms in France, including YouTube.32 The French law also led Twitter initially to block an attempt by the French government’s information service to pay for sponsored tweets for a voter registration campaign in the lead-up to the European parliamentary elections, though this position was eventually reversed.
The French ban on issue advertising on social media was legitimated by a parallel ban on political advertising via print or broadcast media. Other jurisdictions seeking to impose restrictions on social media advertising might similarly consider aligning those rules with the principles governing offline or traditional media advertising.
Some jurisdictions have opted to place responsibility on the entities that sell political advertisements, including social media companies, to enforce restrictions on advertising outside of the designated campaign period – both before the campaign period begins as well as during official silence periods in the day or days directly before the election.
Indonesia had some success calling on the platforms to enforce the three-day blackout period prior to its 2019 Elections. According to interlocutors, Bawaslu sent a letter to all of the platforms advising them that it would pursue criminal penalties should the platforms allow paid political advertising during the designated blackout period. Despite responses from one or more of the platforms that the line between advertising in general and political advertising was too uncertain to enforce a strict ban, Bawaslu insisted that the platforms find a way to comply. The platforms in turn reported rejecting large numbers of advertisements during the blackout period. Bawaslu’s restrictions applied only to paid advertising, not organic posts.
Under India’s “Voluntary Code of Ethics for the 2019 General Election,” social media companies committed themselves to take down prohibited content within three hours during the 48-hour silence period before polling. The signatories to the Code of Ethics developed a notification mechanism through which the Election Commission could inform relevant platforms of potential violations of Section 126 of the Representation of the People Act, which bars political parties from advertising or broadcasting speeches or rallies during the silence period.
India and Indonesia are both very large markets, and most global social media companies have a physical presence in both countries. These factors significantly contribute to these countries’ abilities to compel platform compliance. This route is unlikely to be as effective in countries that do not have as credible a threat of legal sanction over the platforms or the ability to place penalties or restrictions on the platforms in a way that impacts their global business.
For countries that do attempt this route, restrictions that rely on the platforms for enforcement must, as with restrictions on social media campaigning placed on domestic actors, account for the definitional distinctions between paid and unpaid content and between political and issue campaigning in order to be enforceable. The Canadian framework acknowledges the complexity of enforcing campaign silence online by exempting content that was in place before the blackout period and has not been changed.33 Facebook’s decision to unilaterally institute a political advertising blackout period directly surrounding the 2020 U.S. Elections similarly limited political advertising to content already running on the platform; no ads containing new content could be placed. Moves to restrict paid advertising may advantage incumbents or other contestants that have had time to establish a social media audience in advance of the election, since paid advertising is a critical tool that can allow new candidates to reach large audiences.
During the 2019 elections, the Election Commission of India required that paid online advertising that featured the names of political parties or candidates be vetted and pre-certified by the Election Commission. Platforms, in turn, were only allowed to run political advertisements that had been pre-certified.34
This measure applied only to a narrow band of political advertisements – issue ads or third-party ads that avoid explicit mention of parties and candidates would not need to be pre-certified under these rules. For other countries, implementing a pre-certification requirement would necessitate institutional capacity on par with Indian electoral authorities to make the vetting of all ads possible, as well as the market size and in-country presence of company offices needed to compel the companies to comply.
Mongolia’s draft electoral laws would require political parties and candidates to register their websites and social media accounts. These draft laws would also block access to websites that run content from political actors that do not comply; as worded, this provision seems to penalize third-party websites for breaches committed by a contestant. The provisions further require that the comments function on official campaign websites and social media accounts be disabled, with non-compliance incurring a fine.35 As the law is still in draft form, the enforceability of these measures had not been tested at the time of publication.
At present, social media platforms have differing policies on the ability of state-controlled news media to place paid advertising on their platforms. While platforms have largely adopted restrictions on foreign actors’ ability to place political advertising, some still allow state-controlled media to pay to promote their content to foreign audiences more generally. Twitter has banned state-controlled media entities from placing paid advertising of any kind on its platform.36 In countries where Facebook’s Ad Library is enforced, the advertiser verification process attempts to prevent foreign actors from placing political advertising. However, Facebook does not currently restrict the ability of state-linked media to pay to promote their news content to foreign audiences, a tool that state actors use to build foreign audiences.
Analysis by the Stanford Internet Observatory demonstrates how Chinese state media uses social media advertising as a part of broader propaganda efforts and how such efforts were used to build a foreign audience for state-controlled traditional media outlets and social media accounts. The ability to reach this large audience was then used to deceptively shape favorable narratives about China during the coronavirus pandemic.
Prohibitions against foreign state-linked actors paying to promote their content to domestic audiences could be tied to other measures that attempt to bring transparency to political lobbying. For example, some experts in the U.S. propose using the Foreign Agents Registration Act (FARA) to restrict the ability of foreign agents registered under the Act to advertise to American audiences on social media. This in turn would require a consistent and proactive effort on the part of U.S. authorities to ensure that state media outlets are identified and registered as foreign agents. Rather than prohibit ads placed by known foreign agents, another option is to require platforms to label such ads to increase transparency. Several platforms have independently adopted such provisions,37 though enforcement has been inconsistent.
Another avenue being explored in larger markets is placing restrictions on the ways in which personal data can be used by platforms to target advertising. Platforms, to some degree, are adopting such measures in the absence of specific regulation. Google, for example, allows a narrower range of targeting criteria for election ads than for other types of advertisements. Facebook does not limit the targeting of political ads, though it offers various tools to give users a degree of transparency into how they are being targeted. Facebook also allows users to opt out of certain political ads, though these options are only available in the United States as of early 2021. Less well understood are the tools used by streaming television services to target ads. National-level regulation of this nature outside of the U.S. or EU is unlikely to alter the platforms’ policies. Further discussion on this topic can be found in the topical section on platform responses to disinformation.