2. Measures to restrict online content and behaviors

Measures to restrict content or behaviors related to the use of social media or other digital technologies strive to bring campaign regulations up to date with the current information environment. In the absence of rules of the road, social media and digital technologies can be used in campaigns in blatantly deceptive and destructive ways with impunity. Even where not explicitly prohibited, some uses of social media and other digital technologies may contradict principles governing campaigning that are enshrined elsewhere in electoral law.

i. Restrict content or behaviors: measures directed at domestic actors
a. Prohibit social media campaigning outside of a designated campaign period

Many countries delimit the timeframe of the campaign period. This may consist of, for example, a stipulation that campaign activities may only begin one or several months before Election Day. An electoral silence period of one or several days directly prior to Election Day, during which certain campaign activities are prohibited, also has wide global precedent. These provisions may apply very narrowly to candidates and political parties contesting the election or more broadly to political statements or advertisements placed by non-contestant campaigners, meaning third parties that engage in campaigning but are not themselves candidates or political parties. Some countries have extended these provisions to cover political activity and advertising on social media, but many others are either silent on the topic or explicitly exempt social media from campaign regulations.

Temporal restrictions on campaigning via social media are more likely to make an impact on the spread of disinformation when they are one part of a combination of measures intended to create rules and norms for the use of social media in campaigns. While disinformation tactics will continue to evolve, features of current online influence operations include the cultivation of online audiences, infiltration of existing online affinity networks, and the creation and growth of networks of coordinated accounts – processes that take time and, frequently, the investment of financial resources. Measures to temporally restrict the length of a campaign period combined with detailed stipulations about what activities constitute a campaign expenditure, for example, might inhibit domestic actors seeking to build a deceptive social media presence over the course of months or years that they plan to activate during the campaign period.

Extending existing laws that set time restrictions on campaign periods to also cover social media can be relatively straightforward. Argentina’s electoral laws, for example, indicate that television and radio advertising is limited to 35 days prior to the date set for the election and that campaigning via the internet or mobile technologies is only permissible during the campaign period (which starts 50 days before Election Day and ends with the start of the electoral silence period 48 hours before the election).17 Resolution of the definitional considerations outlined in the section above – what constitutes digital media, online campaigning, and political advertising – is necessary to make the enforcement of restrictions on campaigning outside of the designated period predictable and proportionate.

Unlike some of the newer or more hypothetical legal and regulatory approaches explored elsewhere in this section of the guidebook, prohibitions on social media use during campaign periods have been interpreted in a significant body of judicial precedent. Notable cases include:18

  • In 2015, the High Chamber of the Federal Electoral Tribunal of Mexico ruled against a political party after a number of high-profile individuals tweeted in support of the party during the electoral silence period. The Tribunal determined that the coordination behind these actions, including the identification of paid intermediaries, constituted a part of the party’s propaganda strategy.
  • A 2010 ruling by the Superior Electoral Court of Brazil addressed an instance in which a Vice Presidential candidate tweeted in support of his Presidential running mate prior to the start of the campaign period. The court fined the candidate on the grounds that the tweet constituted illegal electoral propaganda. 
  • In two cases from 2012 and 2016, the High Chamber of the Federal Electoral Tribunal of Mexico ruled that posts made by candidates or pre-candidates to personal social media accounts outside of the campaign period were allowable if the content refrained from overt appeals for electoral support and was in the interest of free expression on issues of national interest.
  • The Supreme Court of Slovenia determined in 2016 that it was allowable to publish personal opinions during the electoral silence period, including via social media. The determination was made after a private citizen was fined for posting an interview with a candidate on Facebook during the silence period. 

For regulators considering these measures, it should be noted that restrictions on the activities of legitimate political actors can provide an advantage to malign actors that are not subject to domestic law. Ahead of the 2017 French presidential elections, for example, troves of hacked data from the campaign of Emmanuel Macron were posted online moments before the designated 24-hour silence period before Election Day, during which media and campaigns are unable to discuss the election, leaving the campaign unable to respond publicly to the attack.

b. Restrict online behaviors that constitute an Abuse of State Resources

The extension of Abuse of State Resources (ASR) provisions to social media is a way in which regulation (paired with enforcement) can deter incumbents from using the resources of the state to spread disinformation for political advantage. As domestic actors increasingly adopt tactics pioneered by foreign state actors, manufacturing and artificially amplifying social media content in deceptive ways to buoy their domestic political prospects, measures designed to deter domestic corruption may have new application.

The IFES ASR assessment framework recognizes Restrictions on Official Government Communications to the Public and Restrictions on State Personnel as two elements of a comprehensive ASR legal framework for elections. These are two clear areas where extending ASR provisions to social media has value. For example, restrictions on the messages that an incumbent candidate might disseminate via public media may be logically extended to restrictions on the use of official government social media accounts for campaigning. Additionally, restrictions on state personnel – for example, banning engagement in campaigns while on duty or an overarching mandate to maintain impartiality – may need to be explicitly updated to address the use of personal social media accounts.

With regard to ASR, potential questions to investigate include: How are the official social media accounts of government agencies being used during the campaign period? Are the accounts of government agencies coordinating with partisan social media accounts to promote certain narratives? How are the accounts of state employees being used to promote political content?

For incumbents seeking to use state resources to secure electoral advantage, the personal social media accounts of state personnel and the social media reach of official state agencies are attractive real estate for the mobilization of political narratives. In Serbia, for example, analysis by the Balkan Investigative Reporting Network alleges that the ruling party maintained a software system that logged the actions of hundreds of individuals’ social media accounts (many of them belonging to state employees posting content during regular business hours) as they pushed party propaganda and disparaged political opponents ahead of the 2020 elections. If true, these allegations would amount to a ruling party turning state employees into a troll army to wield against political opponents.

Prior to the 2020 elections, the Anti-Corruption Agency of Serbia issued a statement that “Political subjects and bearers of public functions should responsibly use social networks and the Internet for the pre-election campaign since political promotion on Internet pages owned by the government bodies represents an abuse of public resources.”19 The agency noted that the increase in campaigning via social media as a result of COVID-19 social distancing restrictions brought particular attention to this issue.


“The hardest thing is connecting bad actors to the government…. It’s not a grassroots problem; it’s an elite politics problem.” — Southeast Asian Civil Society Representative

Other actions have been taken globally at the intersection of ASR and social media. In 2015, for example, the Monterrey Regional Chamber of the Federal Electoral Tribunal of Mexico ruled that a sitting governor violated the law by using a government vehicle to travel to polling stations with political candidates and posting about this activity via a Twitter account promoted on an official government webpage. As a result, the court annulled the election, though a remedy this extreme is out of step with international good practice on when elections can or should be annulled.

c. Set limits on the use of personal data by campaigns

Restrictions on the use of personal data by domestic political actors are one avenue some countries are exploring to block the dissemination and amplification of disinformation. Microtargeting, the use of user data to precisely target advertisements and messages to highly specific audiences, has received considerable attention. Microtargeting may enable legitimate political entities, as well as malign foreign and domestic actors, to narrowly tailor advertising to reach highly specific audiences in ways that can enable the opaque dissemination of misleading or otherwise problematic content. By limiting campaigns’ ability to use personal data, regulators may also limit their ability to divisively target advertisements to very narrow audiences.

In the United Kingdom, the Information Commissioner’s Office (ICO) launched an investigation in 2017 into the use of personal data for political purposes, in response to allegations that individuals’ personal data was being used to microtarget political advertisements during the EU Referendum. The ICO fined the Leave.EU campaign and associated entities for improper data protection practices and investigated the Remain campaign on a similar basis.

While the use of data is included in this section on restricting content or behaviors, the topic also has transparency and equity implications. In their analysis of the regulation of online political microtargeting in Europe, academic Tom Dobber and colleagues note that a new Political Parties Act has been proposed in the Netherlands which “include[s] new transparency obligations for political parties with regard to digital political campaigns and political micro-targeting.”20 The authors go on to observe that “The costs of micro-targeting and the power of digital intermediaries are among the main risks to political parties. The costs of micro-targeting may give an unfair advantage to the larger and better-funded parties over the smaller parties. This unfair advantage worsens the inequality between rich and poor political parties and restrains the free flow of political ideas.”21

Limitations on the use of personal data for political campaigning are generally included in larger policy debates around data privacy and individuals’ rights over their personal data. In Europe, for example, the EU’s General Data Protection Regulation (GDPR) places restrictions on political parties’ ability to buy personal data, and voter registration records are inaccessible in most countries.22 The subject of data privacy is explored further in the topical section on norms and standards.

d. Limit political advertising to entities that are registered for the election

Some jurisdictions limit the types of entities that are able to run political advertisements. Albanian electoral law, for example, stipulates that “only those electoral subjects registered for elections are entitled to broadcast political advertisements during the electoral period on private radio, television or audio-visual media, be they digital, cable, analog, satellite or any other form or method of signal transmission.”23 In Bowman v. the United Kingdom, the European Court of Human Rights ruled that it is acceptable for countries to place financial limitations on non-contestant campaigning that are in line with limits for contestants, though the court also ruled that unduly low spending limits on non-contestants create barriers to their ability to freely share political views, violating Article 10 of the Convention.24

Though candidates and parties may engage to various degrees in the dissemination of falsehoods and propaganda via their official campaigns, efforts intended to impact the information environment at scale will utilize unofficial accounts or networks of accounts to achieve their aims. Furthermore, such accounts are easily set up, controlled, or disguised to appear as though they are coming from extraterritorial locations, rendering national enforcement toothless.

In practice, measures to restrict advertisements run by non-contestants would only be enforceable with compliance from social media companies – either through blanket restrictions on, or pre-certification of, political advertisements upheld by the platforms. Outside of large markets such as India or Indonesia, which have gained a degree of compliance from the platforms in enforcing such restrictions, this seems unlikely. The other route with the potential to make such a measure enforceable would be for the platforms to comply with user-data requests from national oversight bodies seeking to penalize violations. This presents a host of concerns about selective enforcement and potential violations of user privacy, particularly in authoritarian environments where such data could be misused to target opponents or other dissidents.

e. Ban the distribution or creation of deepfakes for political purposes

Another legislative approach is to ban the use of deepfakes for political purposes. Several U.S. states have passed or proposed legislation to this effect, including Texas, California, and Massachusetts. Updates to U.S. federal law in 2020 also require, among other things, that the executive branch notify the U.S. legislature in instances where foreign deepfake disinformation activities target U.S. elections. Definitions of deepfakes in these pieces of legislation focus on an intent to deceive through highly realistic manipulation of audio or video using artificial intelligence.

It is conceivable that existing statutes related to identity fraud, defamation, or consumer protection might cover the deceptive use of doctored videos and images for political purposes. One study reports that 96 percent of deepfakes involve the nonconsensual use of female celebrities’ images in pornography, suggesting that existing provisions related to identity fraud or non-consensual use of intimate imagery may also be applicable. Deepfakes are often used to discredit women candidates and public officials, so sanctioning the creation and/or distribution of deepfakes, or using existing legal provisions to prosecute the perpetrators of such acts, could have an impact on disinformation targeting women who serve in a public capacity.

f. Criminalize dissemination of fake news or disinformation

One common approach to regulation is the introduction of legal provisions that criminalize the creation or dissemination of disinformation or fake news. This is a worrisome trend, as it has significant implications for freedom of expression and freedom of the press. As discussed in the definition section Why are definitions of fake news and disinformation problematic?, the extreme difficulty of arriving at clear definitions of prohibited behaviors can lead to unjustified restrictions and direct harms to human rights. Though some countries adopt such measures in a genuine attempt to mitigate the impact of disinformation on political and electoral processes, such provisions are also opportunistically adopted by regimes to stifle political opposition and muzzle the press. Even in countries where measures might be undertaken in a good faith attempt to protect democratic spaces, the potential for abuse and selective enforcement is significant. Governments have also passed a number of restrictive and emergency laws in the name of curbing COVID-related misinformation and disinformation, with similarly chilling implications for fundamental freedoms. The Poynter Institute maintains a database of anti-misinformation laws with an analysis of their implications.

Before adopting additional criminal penalties for the dissemination of disinformation, legislators and regulators should consider whether existing provisions in the criminal law such as those covering defamation, hate speech, identity theft, consumer protection, or the abuse of state resources are sufficient to address the harms that new criminal provisions attempt to address. If the existing criminal law framework is deemed insufficient, revisions to criminal law should be undertaken with caution and awareness of the potential for democratically damaging downstream results.

“If we want to fight hoaxes, it’s not through criminal law, which is too rigid.” — Indonesian Civil Society Representative

It should be noted that some attempts have been made to legislate against online gender-based violence, which sometimes falls into the category of disinformation. Scholars Kim Barker and Olga Jurasz consider this question in their book, Online Misogyny as Hate Crime: A Challenge for Legal Regulation?, where they conclude that existing legal frameworks have been unsuccessful in ending online abuse because they focus more on punishment after a crime is committed rather than on prevention.

ii. Restrict content or behaviors: measures directed at social media and technology platforms

National legislation directed at social media and technology platforms is often undertaken in an attempt to increase domestic oversight over these powerful international actors, who otherwise have little legal obligation to minimize the harms that stem from their products. Restrictions on content and behaviors that compel platform compliance can make companies liable for all of the content on their platforms or, more narrowly, for paid advertising only. In this debate, platforms will argue, with some merit, that it is nearly impossible for them to screen billions of individual user posts each day. Conversely, it may be more reasonable to expect social media platforms to scrutinize paid advertising content.

As discussed in the section on domestic actors, some countries prohibit paid political advertising outside of the campaign period, some prohibit paid political advertising altogether, while others limit the ability to place political advertisements to entities that are registered for the election. In some instances, countries have called on social media companies to enforce these restrictions by making them liable for political advertisements on their platforms. 

Placing responsibility on the platforms to enforce national advertising restrictions also has the potential to create a barrier for political or issue advertisements placed by seemingly non-political actors or by unofficial accounts affiliated with political actors. However, if national regulators do take this approach, the difficulties of compliance with dozens if not hundreds of disparate national regulatory requirements are certain to be a point of contention with the companies. Like any other measure that places boundaries on permissible political expression, it also carries the potential for abuse.


The global conversation around platform regulations that would fundamentally alter the business practices of social media and technology companies – anti-trust or user data regimes, for example – is beyond the scope of this chapter. The focus instead is on attempts at the national level to place enforceable obligations on the platforms that alter the way they conduct themselves in a specific national jurisdiction.

Often, the enforceability of country-specific regulations placed on the platforms will differ based on the perceived political or reputational risk associated with inaction in a country, which can be a function of market size, geopolitical significance, potential for electoral violence, or international visibility. That said, some measures are easier to comply with in the sense that they do not require platforms to reconfigure their products in ways that have global ramifications, and they are thus more readily subject to national rulemaking.

The ability of a country to compel action from the platforms can also be associated with whether the platforms have an office or legal presence in that country. This reality has spawned national laws requiring platforms to establish a local presence to respond to court orders and administrative proceedings. Germany has included a provision to this effect in its Interstate Media Treaty. Requirements to appoint local representatives that enable platforms to be sued in court become highly contentious in countries that lack adequate legal protections for user speech and where fears of censorship are well-founded. A controversial Turkish law went into effect on October 1, 2020, requiring companies to appoint a local representative accountable to local authorities’ orders to block content deemed offensive. U.S.-based social media companies have chosen not to comply, at the urging of human rights groups, and face escalating fines and possible bandwidth restrictions that would throttle access to the platforms in Turkey in the case of continued non-compliance. This contrast illustrates the challenges social media platforms must navigate in complying with national law. Measures that constitute reasonable oversight in a country with robust protections for civil and political rights might serve as a mechanism for censorship in another.

At the same time, joint action grounded in international human rights norms could be one way for countries with less individual influence over the platforms to elevate their legitimate concerns. The Forum on Information and Democracy’s November 2020 Policy Framework articulates the challenge of harmonizing transparency requirements while preventing politically motivated abuse of national regulations. While joint action at the level of the European Union is occurring, the report points to the possibility of the Organization for American States, the African Union, the Asia-Pacific Economic Cooperation or Association of Southeast Asian Nations, or regional development banks as potential organizing forums for joint action in other regions.


 

a. Hold platforms liable for all content and require removal of content     

The debate over what content should be allowable on social media platforms is global in scope. Analysis on this topic is prolific and global consensus is unlikely to emerge given legitimate and differing definitions of the bounds that can and should be placed on speech and expression. Many of these measures that introduce liability for all content have hate speech as a central component. While hate speech is not limited to political or electoral periods, placing pressure on societal fault lines through the online amplification of hate speech is a common tactic used in political propaganda and by disinformation actors during electoral periods.

Some national jurisdictions have attempted to introduce varying degrees of platform responsibility for all of the content hosted on their platforms, regardless of whether it is organic or paid content. 

The German Network Enforcement Act (NetzDG) requires social media companies to delete “manifestly unlawful” content within 24 hours of being notified. Other illegal content must be reviewed within seven days of being reported and deleted if found to be in violation of the law. Failure to comply carries up to a 5 million euro fine, though the law exempts providers who have fewer than 2 million users registered in Germany. The law does not actually create new categories of illegal content; its purpose is to require social media platforms to enforce 22 statutes on online content that already exist in the German code. It targets already-unlawful content such as “public incitement to crime,” “violation of intimate privacy by taking photographs,” defamation, “treasonous forgery,” forming criminal or terrorist organizations, and “dissemination of depictions of violence.” It also includes Germany’s well-known prohibition of glorification of Nazism and Holocaust denial. The takedown process does not require a court order or provide a clear appeals mechanism, relying on online platforms to make these determinations.25

The law has been criticized as being too broad and vague in its differentiation of “unlawful content” and “manifestly unlawful content.” Some critics also object to NetzDG as a “privatized enforcement” law because online platforms, rather than courts or other democratically legitimate institutions, assess the legality of the content. It is also credited with inspiring a number of copycat laws in countries where the potential for censoring legitimate expression is high. As of late 2019, Foreign Policy identified 13 countries that had introduced similar laws; the majority of these countries were ranked as “not free” or “partly free” in Freedom House’s 2019 Freedom on the Net assessment.26

France, which has pre-existing rules restricting hate speech, also introduced measures similar to those in Germany to govern content online. However, the French constitutional court struck down these measures in 2020. Like the German law, they would have required platforms to review and remove hateful content flagged by users within 24 hours or face fines. The court ruled that the provisions would lead platforms to adopt an overly conservative attitude toward removing content in order to avoid fines, thus restricting legitimate expression.

The United Kingdom is another frequently cited example that illustrates various approaches to regulating harmful online content, including disinformation. A 2019 Online Harms White Paper outlining the UK government’s plan for online safety proposed placing a statutory duty of care on internet companies for the protection of their users, with oversight by an independent regulator. A public consultation period for the White Paper informed proposed legislation in 2020 that focuses on making companies responsible for the systems they have in place to protect users from harmful content. Rather than requiring companies to remove specific pieces of content, the new framework would require the platforms to provide clear policies on the content and behavior that are acceptable on their sites and to enforce these standards consistently and transparently. 

These approaches contrast with the Bulgarian framework, for example, which exempts social media platforms from editorial responsibility.27 Section 230 of the United States Communications Decency Act also expressly releases social media platforms from vicarious liability.

Other laws that introduce some degree of liability or responsibility for platforms to moderate harmful content have been proposed or enacted in countries around the globe. Broadly speaking, this category of regulatory response is the subject of fierce debate over the potential for censorship and abuse. The models in Germany, France, and the United Kingdom are frequently cited examples of attempts by consolidated democracies to more actively impose a duty on platforms for the content they host while incorporating sufficient checks to protect freedom of expression – though measures in all three countries have also been criticized for the ways they attempt to strike this balance. These different approaches also illustrate how a proliferation of national laws introducing platform liability is poised to place a multitude of potentially contradictory obligations on social media companies. 

b. Prohibit platforms from hosting paid political advertising

Some jurisdictions prohibit paid campaign advertising in traditional media outright, with that ban extending or potentially extending to paid advertising on social media.28 “For decades, paid political advertising on television has been completely banned during elections in many European democracies. These political advertising bans aim to prevent the distortion of the democratic process by financially powerful interests and to ensure a level playing field during elections.”29 

The French Electoral Code stipulates that for the 6 months prior to the month of an election, commercial advertising for the purposes of election propaganda via the press or “any means of audiovisual communication” is prohibited.30 A stipulation such as this is contingent on clear definitions of online campaigning and political advertising; amendments to the French Electoral Code in 2018, for example, attempt to inhibit a broad range of political and issue advertisements by stipulating that the law applies to “information content relating to a debate of general interest,”31 rather than limiting the provision to ads that directly reference candidates, parties, or elections. In the French case, these provisions, along with a number of transparency requirements discussed in the sections below, led some platforms, such as Twitter, to ban all political campaign ads and issue advocacy ads in France, a move that was later expanded into a global policy. Similarly, Microsoft banned all ads in France “containing content related to debate of general interest linked to an electoral campaign,” which is also now a global policy. Google banned all ads containing “informational content relating to a debate of general interest” between April and May 2019 across its platforms in France, including YouTube.32 The French law also led Twitter to initially block an attempt by the French Government’s information service to pay for sponsored tweets for a voter registration campaign in the lead-up to European parliamentary elections, though this position was eventually reversed.

The French ban on issue advertising on social media was legitimated by a parallel ban on political advertising via print or broadcast media. Other jurisdictions seeking to impose restrictions on social media advertising might similarly consider aligning those rules with the principles governing offline or traditional media advertising. 

c. Hold platforms responsible for enforcing restrictions on political advertisements run outside a designated campaign period 

Some jurisdictions have opted to place responsibility on the entities that sell political advertisements, including social media companies, to enforce restrictions on advertising outside of the designated campaign period – both before the campaign period begins and during official silence periods in the day or days directly before the election.

Indonesia had some success calling on the platforms to enforce the three-day blackout period prior to its 2019 elections. According to interlocutors, Bawaslu sent a letter to all of the platforms advising them that it would enforce criminal penalties should the platforms allow paid political advertising during the designated blackout period. Despite responses from one or more of the platforms that the line between advertising in general and political advertising was too uncertain to enforce a strict ban, Bawaslu insisted that the platforms find a way to comply. The platforms in turn reported rejecting large numbers of advertisements during the blackout period. Bawaslu’s restrictions applied only to paid advertising, not organic posts.

Under India’s “Voluntary Code of Ethics for the 2019 General Election,” social media companies committed themselves to take down prohibited content within three hours during the 48-hour silence period before polling. The signatories to the Code of Ethics developed a notification mechanism through which the Election Commission could inform relevant platforms of potential violations of Section 126 of the Representation of the People Act, which bars political parties from advertising or broadcasting speeches or rallies during the silence period.

India and Indonesia are both very large markets, and most global social media companies have a physical presence in both countries. These factors significantly contribute to these countries’ abilities to compel platform compliance. This route is unlikely to be as effective in countries that do not have as credible a threat of legal sanction over the platforms or the ability to place penalties or restrictions on the platforms in a way that impacts their global business.  

For countries that do attempt this route, as with restrictions on social media campaigning placed on domestic actors, restrictions that rely on the platforms for enforcement must also grapple with the definitional distinctions between paid and unpaid content and between political and issue campaigning, for example, to have any enforceability. The Canadian framework acknowledges the complexity of enforcing campaign silence online by exempting content that was in place before the blackout period and has not been changed.33 Facebook’s decision to unilaterally institute a political advertising blackout period directly surrounding the 2020 U.S. elections similarly limited political advertising to content already running on the platform; no ads containing new content could be placed. Moves to restrict paid advertising may advantage incumbents or other contestants that have had time to establish a social media audience in advance of the election; paid advertising is a critical tool that can allow new candidates to reach large audiences.

d. Only allow platforms to run pre-certified political advertisements

During the 2019 elections, the Election Commission of India required that paid online advertising that featured the names of political parties or candidates be vetted and pre-certified by the Election Commission. Platforms, in turn, were only allowed to run political advertisements that had been pre-certified.34 

This measure only applied to a narrow band of political advertisements – issue ads or third-party ads that avoid explicit mention of parties and candidates would not need to be pre-certified under these rules. For other countries, implementing a pre-certification requirement would necessitate institutional capacity on par with that of India’s electoral authorities to make the vetting of all ads possible, as well as the market size and in-country presence of company offices needed to get the companies to comply. 

Mongolia’s draft electoral laws would require political parties and candidates to register their websites and social media accounts. The draft laws would also block access to websites that run content by political actors that do not comply – a provision that, as worded, seems to penalize third-party websites for breaches committed by a contestant. The provisions further require that the comments function on official campaign websites and social media accounts be disabled, with non-compliance incurring a fine.35 As the law is still in draft form, the enforceability of these measures had not been tested at the time of publication.

e. Obligate platforms to ban advertisements placed by state-linked media 

At present, social media platforms have differing policies on the ability of state-controlled news media to place paid advertising on their platforms. While platforms have largely adopted restrictions on foreign actors’ ability to place political advertising, some platforms still allow state-controlled media to pay to promote their content to foreign audiences more generally. Twitter has banned state-controlled media entities from placing paid advertising of any kind on its platform.36 For countries where Facebook’s Ad Library is being enforced, the advertiser verification process attempts to prohibit foreign actors from placing political advertising. However, Facebook does not currently restrict the ability of state-linked media to pay to promote their news content to foreign audiences, a tool that state actors use to build foreign audiences.

Analysis by the Stanford Internet Observatory demonstrates how Chinese state media uses social media advertising as a part of broader propaganda efforts and how such efforts were used to build a foreign audience for state-controlled traditional media outlets and social media accounts. The ability to reach this large audience was then used to deceptively shape favorable narratives about China during the coronavirus pandemic.

Prohibitions against foreign state-linked actors paying to promote their content to domestic audiences could be tied to other measures that attempt to bring transparency to political lobbying. For example, some experts in the U.S. propose applying the Foreign Agents Registration Act (FARA) to restrict the ability of foreign agents registered under FARA to advertise to American audiences on social media. This in turn requires a consistent and proactive effort on the part of U.S. authorities to ensure that state media outlets are identified and registered as foreign agents. Rather than prohibit ads placed by known foreign agents, another option is to require platforms to label such ads to increase transparency. Several platforms have independently adopted such provisions,37 though enforcement has been inconsistent.

f. Restrict how platforms can target advertisements or use personal data

Another avenue being explored in larger markets is placing restrictions on the ways in which personal data can be used by platforms to target advertising. Platforms, to some degree, are adopting such measures in the absence of specific regulation. Google, for example, allows a narrower range of targeting criteria for election ads than for other types of advertisements. Facebook does not limit the targeting of political ads, though it offers various tools to give users a degree of transparency into how they are being targeted. Facebook also allows users to opt out of certain political ads, though these options are only available in the United States as of early 2021. Less well understood are the tools used by streaming television services to target ads. Outside of the U.S. or EU, national-level regulation of this nature is unlikely to alter the platforms’ policies. Further discussion on this topic can be found in the topical section on platform responses to disinformation.

Footnotes

17. Law on the Financing of Political Parties, No. 26215 (amended 2019): art. 64.

18. These cases, among others, are compiled in Jose Luis Vargas Valdez, “Study on the Role of Social Media and the Internet in Democratic Development,” European Commission for Democracy Through Law (2018): app. A. Full judgments and case summaries are available at ElectionJudgements.org. 

19. “V.I.P. Daily News Report,” V.I.P. News Services, no. 6866, June 2, 2020.

20. Tom Dobber, Ronan Ó Fathaigh and Frederik J. Zuiderveen Borgesius, “The regulation of online political micro-targeting in Europe,” Internet Policy Review (2019): 12.

21. Ibid., 4.

22. The European Commission has issued a guidance document on the application of EU data protection legislation in the electoral context.

23. The Electoral Code of the Republic of Albania, No. 10 019 (amended 2015): art. 84 (4).

24. Bowman v. the United Kingdom, para. 47.

25. This paragraph draws on analysis from the Library of Congress and the Washington Post.

26. The 13 countries identified were Venezuela, Vietnam, Russia and Belarus, Honduras, Kenya, India, Singapore, Malaysia, the Philippines, France, the United Kingdom, and Australia.

27. Electoral Code of Bulgaria (2014): Additional Provisions, § 1(15-16).

28. The International IDEA Political Finance Database lists the countries that uphold an absolute ban on paid political advertising. 

29. Dobber, Ó Fathaigh, and Zuiderveen Borgesius, “The regulation of online political micro-targeting in Europe,” 2.

30. Art. L52-1. 

31. Art. L. 163-1, 2. 

32. Dobber, Ó Fathaigh, and Zuiderveen Borgesius, “The regulation of online political micro-targeting in Europe,” 11.

33. Canada Elections Act, art. 324 (a).

34. Voluntary Code of Ethics for the 2019 General Election, Commitment 5.

35. “ODIHR Opinion on Draft Laws of Mongolia on Presidential, Parliamentary and Local Elections,” OSCE Office for Democratic Institutions and Human Rights, November 25, 2019: 10-11.

36. This policy emerged in response to criticism Twitter received after Chinese state media placed ads on the platform to discredit pro-democracy protests in Hong Kong. 

37. Including YouTube, Twitter, and Facebook.