Legal and Regulatory Responses

Written by Lisa Reppell, Global Social Media and Disinformation Specialist at the International Foundation for Electoral Systems Center for Applied Research and Learning

The legal and regulatory frameworks governing elections vary significantly in how comprehensively they have adapted to the widespread use of the internet and social media in campaigning. While lawmakers in some countries have made strides to bring their legal and regulatory frameworks in step with an evolving information environment, other frameworks are largely silent on the topic of digital media. As the tactics of social media and technology-enabled information operations are increasingly adopted by political actors as standard campaign practices, the absence of legal and regulatory guidance that sets bounds on permissible campaigning behaviors becomes increasingly problematic.

Carefully crafted laws and regulations can inhibit political actors from using disinformation and other harmful or deceptive online practices for personal and political gain in ways that erode the health of the democratic information environment. At the same time, the adoption of overly broad legislation can have chilling implications for political and electoral rights. While legal and regulatory reform to adapt to the ways social media and technology have changed elections is essential, grounding that reform in comparative, global good practice can aid regulators in considering the challenges of regulating this area.

Though most countries have established norms and rules to govern the flow of information via print and broadcast media during campaigns and elections, the democratic principles that inform these laws and regulations – freedom of expression, transparency, equity, and the promotion of democratic information – have not been consistently extended to social media and online campaigning. Regulation, however, must do more than simply extend existing media oversight mechanisms to the digital world. Social media and the internet have altered the ways in which individuals encounter, interact with, and create political and electoral information, requiring lawmakers and regulators to adopt approaches consistent with this changed reality.

“[Our organization] look[s] at media content in the run up to elections – we look at print, broadcast, traditional media – all of which are clearly covered by electoral guidelines and processes. If we see something on radio or TV, there would be a means of recourse in our electoral commission to deal with that appropriately. What we saw with digital media… [it wasn’t] covered by anything or anyone. It was a huge gap.” — William Bird, Director of Media Monitoring Africa (South Africa)

Regulation that would change the behavior of foreign adversaries or significantly alter the global business practices of social media platforms is an unrealistic goal for legal reform processes at the national level.1 However, regulating the actions of domestic actors during electoral periods or discrete laws that create pressure on the ways platforms operate within a country are viable areas for reform. Such regulation also builds on the existing mandate of regulatory or judicial bodies to oversee the behavior of domestic actors during elections, including candidates and political parties.

National legislation governing the use of digital media during elections and campaigns has the potential to close loopholes currently being exploited by domestic actors to manipulate the information environment around elections. The use of disinformation for political advantage during campaign periods constitutes more than the dissemination of false or misleading information. Disinformation campaigns are often directed by actors that leverage deceptive and coordinated behaviors online to distort public understanding, heighten social polarization, and undermine trust in elections and democratic institutions. These campaigns are supercharged by the nature, scale, and networked capacity of new online systems and can have an outsized impact on the political participation, societal perception, and safety of women and other marginalized groups. Constructing a network that deploys disinformation at scale often requires financial resources, not only to develop and test messages but also to finance their amplification. In the absence of specific political and campaign finance guidelines for the use of social media in campaigning, few limitations exist on what behaviors are permissible, even in instances where those behaviors would seem to constitute a clear violation of principles that exist elsewhere in the law.

Some countries are developing novel approaches for dealing with the use of social media in campaigns and elections, sometimes in ways that lack international precedent. The intent of this topical section is to detail, categorize, and discuss the implications of these emerging national-level legal, regulatory, and judicial decisions. This section draws from an analysis of the electoral legal frameworks of more than forty countries across six continents. Many of the laws and policies collected in this topical section have not yet been tested extensively in election contexts, so it may not always be clear which will succeed in advancing their intended goals.

Explore: Definitions, Comparative Examples, and Enforcement Considerations

This section of the guidebook is intended to be a resource for lawmakers contemplating the regulation of digital and social media in their own electoral legal frameworks, as well as for international donors and implementers that may be providing comparative examples in the process.

  1. DEFINITIONS: The content in this section begins with a discussion of key definitional considerations that lawmakers must address in the regulation of social media during elections and campaigns, as well as examples of how different countries have chosen to define these concepts. Depending on how these concepts are defined, they have the potential to significantly alter the scope and enforceability of law. 

  2. COMPARATIVE EXAMPLES: The text then proceeds with comparative examples and analysis of measures taken in national-level law, regulation, and jurisprudence. It looks at measures to restrict online content and behaviors during campaigning and elections, as well as measures to promote transparency, equity, and democratic information. The examples that are included can be explored individually according to interest and need not be read consecutively. Examples are intended to provide comparative perspectives to inform legal and regulatory reform discussions, though the inclusion of an example does not constitute an endorsement of that approach.

  3. ENFORCEMENT: Thoughtful regulation means little if it is not accompanied by meaningful consideration of how that regulation will be enforced. A lack of realism about enforcement threatens to undercut the authority of the regulatory bodies enacting reforms and may establish unrealistic expectations of what is achievable through regulation alone.

1.1 What Constitutes Social or Digital Media?

The online media environment continues to evolve, and regulations that are crafted today to address specific elements of that environment may quickly become out of date. Regulators must consider the full range of internet-enabled communication tools to determine how broadly or narrowly to craft their guidance.

Writing in 2014, International IDEA defined social media as “web or mobile-based platforms that allow for two-way interactions through user-generated content (UGC) and communication. Social media are therefore not media that originate only from one source or are broadcast from a static website. Rather, they are media on specific platforms designed to allow users to create (‘generate’) content and to interact with the information and its source.”

In the intervening years, the social web has continued to evolve and definitions such as the above may no longer sufficiently capture the range of online activities that regulators wish to address. Analysis of the 100 most popular social media platforms by the Knight First Amendment Institute at Columbia University articulates the definitional complexities of classifying social media. Capturing campaign activity that takes place on digital messaging applications, such as WhatsApp, Telegram, or Signal, or on subculture internet forums, for example, may require a broader definition than the one above. The role of search engines, online advertisement distributors, or ad-based streaming internet television in campaigning may also require greater definitional breadth.

Germany’s 2020 Interstate Media Treaty (Medienstaatsvertrag – “MStV”) Law provides one of the more comprehensive definitions of the range of activities it seeks to govern. The law introduces “comprehensive media-specific regulations… for those providers that act as gatekeepers for media content or services to disseminate it” such as "search engines, smart TVs, language assistants, app stores, [and] social media." The law attempts to provide detailed definitions under the categories of media platforms, user interfaces, and media intermediaries.

Rather than referring to social or digital media, the electoral code of Canada refers to “online platforms,” defining them based on the salient feature being regulated by the code, namely that they sell advertising. Canadian law defines an online platform as “an Internet site or Internet application whose owner or operator, in the course of their commercial activities, sells, directly or indirectly, advertising space on the site or application to persons or groups.”2

Other jurisdictions will further restrict the social media or online platforms obligated to comply with a new law or regulation based on a specific criterion, such as the number of users. Germany’s Netzwerkdurchsetzungsgesetz (NetzDG) law, for example, which requires companies to expeditiously remove illegal content from their platforms, applies only to internet platforms with at least 2 million users.3

 

In defining social media or digital media, drafters will want to consider:

  • What array of online behavior does this law address? Does it include all websites that allow paid advertising or public comments, such as online news sites or blogs? Does it apply to digital messaging applications (i.e. WhatsApp)? Search engines? Internet advertising distributors? 
  • Is the intent of the law purely to regulate online paid activities taking place on social media? If so, should the definition be focused on online entities that run paid advertising? 
  • Are the obligations created in this law too burdensome for small social media companies in ways that will stifle competition due to the high costs of compliance? If so, should the law be limited to platforms that exceed a certain number of daily users or have a certain amount of revenue or market value?

1.2 What is online campaigning? (organic vs. paid content)

Legal and regulatory frameworks may wish to distinguish between “organic” and “paid” activities undertaken by the actor being regulated. Organic campaign content, for example, would be material shared by a party or candidate with their established social media audience who may or may not engage with or further disseminate that material. The reach of organic content is determined by the size of a candidate or campaign’s social media audience – i.e. those entities that have chosen to follow or engage with the social media actor in question – as well as the quality and appeal of the content that is being shared. 

“Paid content,” on the other hand, is material for which the actor being regulated has paid in order to gain added visibility among audiences that may not have chosen to engage with that material. Different social media and digital platforms have different paid features to expand the reach of content, including but not limited to the placement of advertisements or payment to prioritize content in users’ social media feeds or search engine results. If a party pays for the development of campaign messages or materials, even if they are then distributed through organic channels, that too may qualify as an expense that must be reported, as discussed in the following definitional section on “What constitutes a digital or social media advertising expenditure?”

This distinction is particularly pertinent in instances where there are restrictions on campaigning outside of a designated period. For example, a clear definition is needed to delineate what online behaviors are permissible before a campaign period begins or during an electoral silence period directly prior to the election.

Regulators in different countries have chosen to answer this question in different ways, with some determining that both paid and unpaid social media content constitutes online campaigning, while others determine that regulation pertains only to paid advertising. 

In deciding where to delineate the boundaries of online campaigning, regulators might consider whether their primary intent is to regulate the activities of candidates and parties' official pages or accounts or whether they wish to regulate the activity of any social media user engaging in campaigning. If the objective is to govern the official social media accounts of candidates and parties, monitoring all of the posts and activity of these accounts - paid and unpaid - is a more achievable goal given that only a discrete number of accounts will need to be monitored for compliance.

On the other hand, if regulation aims to impact all social media users posting political content, rather than just the official accounts of parties and campaigns, monitoring all organic posts from every social media user becomes impractical and vulnerable to selective or partisan enforcement. Focusing on paid advertising, particularly in countries where social media platforms’ ad transparency reporting exists, makes oversight of all paid political advertising a more realistic goal.

Venezuela’s electoral legal framework, for example, stipulates that unpaid political expression on social media by candidates or parties is not considered campaigning.4 The Canadian framework acknowledges the complexity of enforcing campaign silence online by exempting “the transmission of a message that was transmitted to the public on what is commonly known as the Internet before the blackout period … that was not changed during that period.”5 Similarly, 2010 guidelines from the National Electoral Commission of Poland prohibit any online activity that constitutes campaigning during the election silence period but allows content that was posted online before the start of the silence period to remain visible.6

In defining online campaigning, regulators will want to consider:

  • Do they wish to distinguish between content that is disseminated through paid and unpaid means? 
  • Is only content shared by parties and candidates subject to regulation, or do stipulations pertain to a broader array of internet users that may post political content or purchase political or issue advertisements?
  • What is the regulatory body’s capacity to monitor and enforce campaign violations, and does this impact how narrowly or broadly online campaigning is defined?

1.3  How does the law define political advertising, campaign advertising, and issue advertising?

Domestic law may take a broad or narrow approach to defining the types of advertising that are subject to scrutiny. Clearly delineating the criteria by which online paid advertisements will be deemed to fall into a regulated category is essential for any regulation that, for example, attempts to place guardrails around permissible political advertising or requires specific disclosures related to online political advertising.

Electoral codes and social media platforms use varying definitions for “political advertising,” “campaign advertising,” “election advertising,” and “issue advertising.” These phrases do not have universal definitions, and establishing the definitional distinctions among these concepts is a familiar challenge from the regulation of offline campaigning as well. For both online and offline campaigning, subtle distinctions within these definitions can significantly alter the scope and impact of a law. 

For countries that have designated campaign periods, “campaign advertising” and “campaign finance” are terms used to delineate activities and expenditures that occur during that designated period, while “political advertising” and “political finance” would include the activities and expenditures of a party that take place outside of the campaign period or relate to the general operations of the party. 

For the purposes of this section of the guidebook, “political advertising” will be used as an overarching term to refer to advertising that is placed by political parties, candidates, or third parties acting on their behalf, as well as any advertisements (regardless of who has placed the ad) that explicitly reference a political party, candidate, or election or that encourage a particular electoral choice. “Campaign advertising” will only be used when referencing measures that apply specifically to a designated campaign period.

The distinction is important, as some party expenditures – for example the placement of advertisements that serve a voter education purpose – might be considered political advertisements or campaign advertisements depending on the definitions used. If the definitions are indistinct, candidates and parties that conduct voter education outside of the campaign period may argue such messages are part of their normal course of business and not part of a campaign, opening a pathway for parties to circumvent campaign regulations.7 

The phrase “issue advertising” is used in this section to capture a wider array of advertisements that reference social or political issues but do not explicitly reference a party, candidate, or election. Issue advertisements can be placed by any entity, whether they are expressly political or not. Countries that subject a broader array of online issue advertising to regulation may choose to do so in order to deter clandestine advertising that pursues political, social, or financial goals but avoids naming candidates or parties in an attempt to skirt regulation. A broad definition significantly expands the array of advertising that must then be subject to rules or review. Facebook notes that for countries tracking issue advertisements, these can come from an array of advertisers including “activists, brands, non-profit groups, and political organizations.”

Attempts to regulate issue advertisements also raise freedom of expression considerations for civil society and advocacy groups. In Ireland for example, regulated activities include those “…to promote or oppose, directly or indirectly, the interests of a third party in connection with the conduct or management of any campaign conducted with a view to promoting or procuring a particular outcome in relation to a policy or policies or functions of the Government or any public authority.”8 The debate over this provision highlighted concerns that such a broad definition could impact the advocacy and campaigning work of civil society organizations.9

New Zealand and Canada have also crafted sufficiently broad definitions of election advertising to make possible the sanction of online political advertising disguised as issue-based advertising.

  • New Zealand10 
    • In this Act, election advertisement— 
      • (a) means an advertisement in any medium that may reasonably be regarded as encouraging or persuading voters to do either or both of the following: 
      • (i) to vote, or not to vote, for a type of candidate described or indicated by reference to views or positions that are, or are not, held or taken (whether or not the name of the candidate is stated): 
      • (ii) to vote, or not to vote, for a type of party described or indicated by reference to views or positions that are, or are not, held or taken (whether or not the name of the party is stated);
  • Canada11 
    • Election advertising means the transmission to the public by any means during an election period of an advertising message that promotes or opposes a registered party or the election of a candidate, including by taking a position on an issue with which a registered party or candidate is associated.

Both New Zealand’s and Canada’s definitions further distinguish election advertising from editorial or opinion content.

Whether national law provides that advertisements about political or social issues are subject to additional transparency or oversight measures may impact the information that is collected and cataloged by Facebook, and possibly by other online platforms. For example, as of early 2021, Facebook captured a greater range of advertisements in its Ad Library for Canada, the European Union, Singapore, Taiwan, the United Kingdom, and the United States than for other countries.12 Among the 34 countries that gained access to the Facebook Ad Library in July and August of 2020, only New Zealand and Myanmar required added disclosure for social issue advertising; the remaining countries required disclosure only for political and electoral ads. In New Zealand’s case, this may have been in response to a national-level legal provision requiring broader disclosures from the platform related to issue advertising, though Myanmar’s legal code is silent on the topic.
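Where a platform’s transparency tools cover a given country, oversight bodies and election monitors can review disclosed advertising programmatically rather than ad by ad. The sketch below is illustrative only: it assumes researcher or oversight access to Facebook’s publicly documented Ad Library API (the Graph API “ads_archive” endpoint), and the API version, parameter names, and field names shown may differ from the platform’s current documentation.

```python
# Illustrative sketch: retrieving disclosed political and issue ads from the
# Facebook Ad Library API for one country. Endpoint, parameters, and fields
# follow Facebook's public documentation but may change between API versions;
# a valid Ad Library access token is required.
import requests

AD_LIBRARY_URL = "https://graph.facebook.com/v18.0/ads_archive"  # version string is an assumption


def fetch_political_ads(access_token, country="NZ", search_terms="election", page_limit=100):
    """Return a list of disclosed political/issue ads that reached the given country."""
    params = {
        "access_token": access_token,
        "ad_type": "POLITICAL_AND_ISSUE_ADS",
        "ad_reached_countries": country,
        "search_terms": search_terms,
        "fields": "page_name,funding_entity,ad_delivery_start_time,spend,impressions,currency",
        "limit": page_limit,
    }
    ads = []
    url = AD_LIBRARY_URL
    while url:
        response = requests.get(url, params=params)
        response.raise_for_status()
        payload = response.json()
        ads.extend(payload.get("data", []))
        # Follow pagination links until the disclosed result set is exhausted.
        url = payload.get("paging", {}).get("next")
        params = None  # the "next" URL already encodes the query parameters
    return ads


if __name__ == "__main__":
    for ad in fetch_political_ads(access_token="YOUR_ACCESS_TOKEN")[:10]:
        print(ad.get("funding_entity"), ad.get("spend"), ad.get("ad_delivery_start_time"))
```

A review along these lines only covers what the platform has chosen, or been required, to disclose for that country; as the examples above show, whether issue advertisements appear at all depends on the disclosure categories the platform applies in that jurisdiction.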

In defining political, campaign, election or issue advertising, regulators will want to consider:

  • Are there definitions of political, campaign, electoral, or issue advertising in the current electoral legal framework? If so, do they apply to social media advertising?
  • If there is no definition, or it does not apply to social media, or it includes a narrow definition of political advertising, would it be beneficial to expand or revise the definition?
  • Does the legal framework require activists, brands, non-profit groups and political organizations to disclose issue advertisements?
  • In each instance, is this a reasonable burden to place on these entities, or will overly onerous requirements suppress their ability to reach intended audiences?
  • What are the implications of any proposed changes on freedom of expression, particularly for civil society organizations engaged in advocacy?

1.4 Who are the payers and paid entities in online campaigning?

If regulators are attempting to use existing legal mechanisms at their disposal – including the legal framework regulating political finance, public corruption, or the use of state resources – then definitions that acknowledge the complexity of the information ecosystem need to be considered. The creation of disinformation at scale by a domestic or foreign actor will likely necessitate the outlay of funds to secure the personnel, expertise, and materials needed to create and maintain a sustained online campaign. Regulation that seeks to bring transparency through disclosure requirements or regulate paid campaign activities must therefore acknowledge the multitude of financial relationships that might constitute an expenditure.

Digital and social media campaigning increases the opportunities to obscure the origins of content by acting through third parties. Measures that seek to bring transparency to these financial flows will want to consider not only who the payer and beneficiary are, but also who the paid entity is – the social media platform itself? Influencers who operate pages or feeds on respective platforms and may be paid to promote political content? Public employees who engage in campaigning via social media while at work? Public relations firms or content creation entities (such as content farms or troll farms) that produce and disseminate content on behalf of a political entity?

Additionally, are those entities operating from within the country or extraterritorially? 

Canada, for instance, exempts social media posts from its definition of “advertising” if they fall within the following parameters: “the transmission by an individual, on a non-commercial basis on the Internet, of his or her personal political views” (emphasis added).13 This can be interpreted to require that payments by political entities to social media intermediaries or influencers be disclosed as advertising. Without this consideration, candidates and political parties can circumvent regulations by paying third-party entities to promote content or place advertisements on their behalf. The nature of social media makes it comparatively easy for a political entity to engage the services of a third party to perform otherwise regulated or prohibited activities on social media while circumventing disclosure requirements. Laws should include clear definitions of terms to capture this reality and close loopholes.

Conversely, measures that sanction or place obligations on the disseminators of unlawful content – without seeking to identify the funders of that content – are unlikely to deter the actors that are the ultimate beneficiaries of disinformation campaigns.

In defining who the payer and paid entities are, regulators will want to consider:

  • If a certain action is prohibited or subject to disclosure requirements, does the legal and regulatory framework also apply to the hiring or instruction of third parties to perform that action?
  • How does the legal or regulatory provision under consideration impact the disseminator of content versus the funder of the activity?

“There is no regulation to catch the funder, only the one who spread[s the content].” — Indonesian Civil Society Representative

1.5 What constitutes a digital or social media advertising expenditure?

If a legal or regulatory approach includes disclosure or transparency requirements, it is important to define the types of expenditures on digital advertisements or digital campaigns that must be disclosed. These requirements may also need to be reviewed at regular intervals to ensure that they suit the rapidly evolving tactics of digital campaigning.

Robust disclosure requirements will provide insights into the sources of funding, the amount of funding provided by each source, and detailed information on how funding was used. Full disclosure is necessary to make it possible to judge if funds are coming from legally allowable sources and being used for legitimate party and campaign purposes. Minimal disclosure requirements make it easy for political actors to comply with the letter of the law while concealing questionable behaviors that violate the intent of disclosure requirements.

Analysis by the UK Electoral Commission notes that digital advertising expenditures can be easily hidden under different reporting categories. The Commission reports that it is unable to capture an accurate picture of how much has been spent on social media advertising because data is limited to payments made directly by the reporting entity to identifiable social media providers, such as Facebook or YouTube. This does not account for the reality that a significant amount of digital spending happens via consultancies or intermediary advertising agencies.14 For example, the Labour Party reported digital advertising expenditures of £16,000 in the 2015 Parliamentary Elections in the UK, when later calculations showed the total to be closer to £130,000 via intermediary advertising agencies. Practices such as this led the Electoral Commission to conclude that more detailed expenditure requirements were needed.15

In defining what information to include in disclosure requirements, regulators will want to consider:

  • What constitutes an expenditure? For example:
    • Only the cost to place an ad? 
    • The payment of digital advertising or public relations firms to design and deploy ad campaigns? 
    • The cost to produce an ad? 
    • The cost to profile target audiences? 
    • The cost to develop and deploy chatbots (or other bots) to engage with users on social media platforms? 
    • The direct or third-party payment of content (or troll) farms to disseminate designated social media content or messages in large numbers? 
    • The cost of obtaining influencer endorsements?

1.6 Is there a timeframe during which expenditures must be disclosed?

For countries that have defined campaign periods outlined in law or regulation, a loophole opens if regulators require detailed disclosure of social media advertising expenditure only during the campaign period. Though such spending may still be captured in regular party financial reporting, figures might only be captured annually and, depending on reporting requirements, may contain less detail than what may be required during campaign periods. Additionally, whether an expense is defined as an agreement to make a payment or a payment itself can impact reporting. If imprecisely defined, a political contestant could, for example, delay payment to a social media intermediary until after Election Day to skirt reporting requirements.

 

In defining a timeframe for disclosure, regulators will want to consider:

  • How are disclosure requirements already outlined in the law for traditional media or political finance? 
  • Is the timeline crafted in a way that aligns with when digital or social media expenditures are likely to take place in the electoral cycle? For example, the cost to profile target audiences or pay for an influencer endorsement could occur well in advance of the electoral event, or payment could happen after Election Day as a way to avoid disclosure requirements that cover only the immediate campaign period.

“Campaign finance limits to expenditure only apply during the campaign period – but there are campaign expenditures also outside of the official campaign period... We have to redefine campaign finance coverage to be more comprehensive.” — Indonesian Civil Society Representative

1.7 Why are definitions of fake news and disinformation problematic?

Legal and regulatory interventions that attempt to ban or sanction “fake news” or disinformation have been widespread in recent years. However, the difficulty of defining these terms is one of the reasons such measures are frequently criticized by those who fear their implications for fundamental rights. As discussed in the introduction to this guidebook, precise definitions of disinformation are elusive, and what is commonly referred to as disinformation encompasses a wide range of deceptive and problematic behaviors.

If the success of a legal or regulatory intervention relies on a precise, comprehensive, and universally applicable definition of “fake news,” “false information,” “misinformation,” “disinformation,” or a similar term, it is likely that the intervention will either result in collateral damage to freedom of expression or be too vague to be reliably enforceable. It also holds a high risk of being selectively enforced, for example, against political opponents or to restrict press freedoms. 

Some jurisdictions have chosen to leave the issue of determining what content constitutes “fake news” to judicial review. French law, for example, stipulates that whether an item is “fake news,” and thus subject to removal or containment, is up to the determination of a judge. The ruling shall be made according to three criteria: the fake news must be manifest, disseminated deliberately on a massive scale, and lead to a disturbance of the peace or compromise the outcome of an election.16 The proportionate application of such a law is dependent on an independent judiciary insulated from political pressure, well-trained judges capable of understanding the digital information ecosystem, and a well-resourced judiciary capable of expediting the review of such claims, including any appeals.

Lawmakers and regulators should consider the range of approaches outlined in this text before resorting to the blunt-force instrument of a ban on or criminalization of fake news or disinformation. In instances where content and speech circulated on social media run afoul of existing criminal law, the referral of violating content for investigation and prosecution under such existing provisions – such as those covering defamation, hate speech, fraud, or identity theft – is recommended over the adoption of additional criminal sanctions for the dissemination of fake news or disinformation.

 

Measures to restrict content or behaviors related to the use of social media or other digital technologies strive to bring campaign regulations up to date with the current information environment. In the absence of rules of the road, social media and digital technologies can be used in campaigns in blatantly deceptive and destructive ways with impunity. While not explicitly prohibited, some uses of social media and other digital technologies may contradict principles governing campaigning enshrined elsewhere in the electoral law.

i. Restrict content or behaviors: measures directed at domestic actors
a. Prohibit social media campaigning outside of a designated campaign period

Many countries delimit the timeframe of the campaign period. This may consist of, for example, a stipulation that campaign activities may only begin one or several months before Election Day. An electoral silence period of one or several days directly prior to Election Day during which certain campaign activities are prohibited also has wide global precedent. These provisions may apply very narrowly to candidates and political parties contesting the election or more broadly to political statements or advertisements placed by non-contestant campaigners, meaning third parties engaged in campaigning that are not themselves candidates or political parties. Some countries have extended these provisions to consider the political activity and advertising on social media, but many are either silent on the topic of social media or explicitly exempt it from campaign regulations.

Temporal restrictions on campaigning via social media are more likely to make an impact on the spread of disinformation when they are one part of a combination of measures intended to create rules and norms for the use of social media in campaigns. While disinformation tactics will continue to evolve, features of current online influence operations include the cultivation of online audiences, infiltration of existing online affinity networks, and the creation and growth of networks of coordinated accounts – processes that take time and, frequently, the investment of financial resources. Measures to temporally restrict the length of a campaign period combined with detailed stipulations about what activities constitute a campaign expenditure, for example, might inhibit domestic actors seeking to build a deceptive social media presence over the course of months or years that they plan to activate during the campaign period.

Extending existing laws that set time restrictions on campaign periods to also cover social media can be relatively straightforward. Argentina’s electoral laws, for example, indicate that television and radio advertising is limited to 35 days prior to the date set for the election and that campaigning via the internet or mobile technologies is only permissible during the campaign period (which starts 50 days before the Election Day and ends with the start of the electoral silence period 48 hours before elections).17 Resolution of the definitional considerations outlined in the section above – what constitutes digital media, online campaigning, and political advertising – is necessary to make the enforcement of restrictions on campaigning outside of the designated period predictable and proportionate. 

Unlike some of the newer or more hypothetical legal and regulatory approaches explored elsewhere in this section of the guidebook, the interpretation of prohibitions on social media use during campaign periods has significant judicial precedent. Notable cases include:18

  • In 2015, the High Chamber of the Federal Electoral Tribunal of Mexico ruled against a political party after a number of high-profile individuals tweeted in support of the party during the electoral silence period. The Tribunal determined that the coordination behind these actions, including the identification of paid intermediaries, constituted a part of the party’s propaganda strategy.
  • A 2010 ruling by the Superior Electoral Court of Brazil addresses an instance in which a Vice Presidential candidate tweeted in support of his Presidential running mate prior to the start of the campaign period. The court fined the candidate on the grounds that the tweet constituted illegal electoral propaganda. 
  • In two cases from 2012 and 2016, the High Chamber of the Federal Electoral Tribunal of Mexico ruled that candidates or pre-candidates posting to personal social media accounts outside of the campaign period were allowable if the content refrained from overt appeals for electoral support and was in the interest of free expression on issues of national interest.
  • The Supreme Court of Slovenia determined in 2016 that it was allowable to publish personal opinions during the electoral silence period, including via social media. The determination was made after a private citizen was fined for posting an interview with a candidate on Facebook during the silence period. 

For regulators considering these measures, it should be noted that restrictions on the activities of legitimate political actors can provide an advantage to malign actors that are not subject to domestic law. Ahead of the 2017 French presidential elections, for example, troves of hacked data from the campaign of Emmanuel Macron were posted online moments before the designated 24-hour silence period before Election Day, during which media and campaigns are unable to discuss the election, leaving the campaign unable to respond publicly to the attack.

b. Restrict online behaviors that constitute an Abuse of State Resources

The extension of Abuse of State Resources (ASR) provisions to social media is a way in which regulation (paired with enforcement) can deter incumbents from using the resources of the state to spread disinformation for political advantage. As domestic actors increasingly adopt tactics pioneered by foreign state actors to manufacture and artificially amplify social media content in deceptive ways to buoy their domestic political prospects, tactics to deter domestic corruption may have application. 

The IFES ASR assessment framework recognizes Restrictions on Official Government Communications to the Public and Restrictions on State Personnel as two elements of a comprehensive ASR legal framework for elections. These are two clear areas where extending ASR provisions to social media has value. For example, restrictions to the messages that an incumbent candidate might disseminate via public media may be logically extended to restrictions on the use of official government social media accounts for campaigning. Additionally, restrictions on state personnel – for example, banning engagement in campaigns while on duty or an overarching mandate to maintain impartiality – may need to be explicitly updated to address the use of personal social media accounts.

In regard to ASR, potential questions to be investigated could include – how are the official social media accounts of government agencies being used during the campaign period? Are the accounts of government agencies engaged in coordination with partisan social media accounts to promote certain narratives? How are the accounts of state employees being used to promote political content? 

For incumbents seeking to use state resources to secure electoral advantage, the personal social media accounts of state personnel and the social media reach of official state agencies are attractive real estate in the mobilization of political narratives. In Serbia, for example, analysis by the Balkan Investigative Reporting Network alleges that the ruling party maintained a software system that logged the actions of hundreds of individuals’ social media accounts (many of those accounts belonging to state employees posting content during regular business hours) as they pushed party propaganda and disparaged political opponents ahead of 2020 elections. If true, these allegations would amount to a ruling party turning state employees into a troll army to wield against political opponents. 

Prior to the 2020 elections, the Anti-Corruption Agency of Serbia issued a statement that “Political subjects and bearers of public functions should responsibly use social networks and the Internet for the pre-election campaign since political promotion on Internet pages owned by the government bodies represents an abuse of public resources.”19 The agency noted that the increase in campaigning via social media as a result of COVID-19 social distancing restrictions brought particular attention to this issue.

“The hardest thing is connecting bad actors to the government…. It’s not a grassroots problem; it’s an elite politics problem.” — Southeast Asian Civil Society Representative

Other actions that have been taken at the intersection of ASR and social media globally include a ruling by the Monterrey Regional Chamber of the Federal Electoral Tribunal of Mexico in 2015, which determined that a sitting governor violated the law by using a government vehicle to travel to polling stations with political candidates and posting about this activity via a Twitter account promoted on an official government webpage. As a result, the court annulled the election, though a remedy this extreme is out of step with international good practice on when elections can or should be annulled.

c. Set limits on the use of personal data by campaigns

Restrictions on the use of personal data by domestic political actors are one avenue some countries are exploring to block the dissemination and amplification of disinformation. Microtargeting, the use of user data to precisely target advertisements and messages to highly specific audiences, has received considerable attention. Microtargeting may enable legitimate political entities, as well as malign foreign and domestic actors, to narrowly tailor advertising to reach highly specific audiences in ways that can enable the opaque dissemination of misleading or otherwise problematic content. By limiting campaigns’ ability to use personal data, regulators may also limit their ability to divisively target advertisements to very narrow audiences.

In the United Kingdom, the UK Information Commissioner’s Office (ICO) launched an investigation in 2017 to look at the use of personal data for political purposes in response to allegations that an individual’s personal data was being used to micro-target political advertisements during the EU Referendum. The ICO fined the Leave.EU campaign and associated entities for improper data protection practices and investigated the Remain campaign on a similar basis.

While the use of data is included in this section on restricting content or behaviors, the topic also has transparency and equity implications. In their analysis of the regulation of online political microtargeting in Europe, academic Tom Dobber and colleagues note that a new Political Parties Act has been proposed in the Netherlands which “include[s] new transparency obligations for political parties with regard to digital political campaigns and political micro-targeting.”20 Dobber goes on to observe that “The costs of micro-targeting and the power of digital intermediaries are among the main risks to political parties. The costs of micro-targeting may give an unfair advantage to the larger and better-funded parties over the smaller parties. This unfair advantage worsens the inequality between rich and poor political parties and restrains the free flow of political ideas.”21

Limitations on the use of personal data for political campaigning are generally included in larger policy debates around data privacy and individuals’ rights over their personal data. In Europe, for example, the EU’s General Data Protection Regulation (GDPR) places restrictions on political parties’ ability to buy personal data, and voter registration records are inaccessible in most countries.22 The subject of data privacy is explored further in the topical section on norms and standards.

d. Limit political advertising to entities that are registered for the election

Some jurisdictions limit the type of entities that are able to run political advertisements. Albanian electoral law, for example, stipulates that “only those electoral subjects registered for elections are entitled to broadcast political advertisements during the electoral period on private radio, television or audio-visual media, be they digital, cable, analog, satellite or any other form or method of signal transmission.”23 In Bowman v. the United Kingdom, the European Court of Human Rights ruled that it is acceptable for countries to place financial limitations on non-contestant campaigning that is in line with limits for contestants, though the court also ruled that unduly low spending limits on non-contestants create barriers to their ability to freely share political views, violating Article 10 of the Convention.24

Though candidates and parties may engage to various degrees in the dissemination of falsehoods and propaganda via their official campaigns, efforts intended to impact the information environment at scale will utilize unofficial accounts or networks of accounts to achieve their aims. Furthermore, such accounts are easily set up, controlled, or disguised to appear as though they are coming from extraterritorial locations, rendering national enforcement toothless.

In practice, measures to restrict advertisements run by a non-contestant would only be enforceable with compliance from social media companies – either through blanket restrictions on or pre-certification for political advertisements upheld by the platforms. Outside of a large market such as India or Indonesia, which have gained a degree of compliance from the platforms in enforcing such restrictions, this seems unlikely. The other route with the potential to make such a measure enforceable would be if the platforms complied with government user-data requests from national oversight bodies that would seek to enforce violations. This presents a host of concerns for selective enforcement and potential violation of user privacy, particularly in authoritarian environments where such data could be misused to target opponents or other dissidents.

e. Ban the distribution or creation of deepfakes for political purposes

Another legislative approach is to ban the use of deepfakes for political purposes. Several U.S. states have passed or proposed legislation to this effect, including Texas, California, and Massachusetts. Updates to U.S. federal law in 2020 also require, among other things, the notification of the U.S. legislature by the executive branch in instances where foreign deepfake disinformation activities target U.S. elections. Definitions of deepfakes in these pieces of legislation focus on an intent to deceive through highly realistic manipulation of audio or video using artificial intelligence.

It is conceivable that existing statutes related to identity fraud, defamation, or consumer protection might cover the deceptive use of doctored videos and images for political purposes. One study reports that 96 percent of deepfakes involve the nonconsensual use of female celebrities’ images in pornography, suggesting that existing provisions related to identity fraud or non-consensual use of intimate imagery may also be applicable. Deepfakes are often used to discredit women candidates and public officials, so sanctioning the creation and/or distribution of deepfakes, or using existing legal provisions to prosecute the perpetrators of such acts, could have an impact on disinformation targeting women who serve in a public capacity.

f. Criminalize dissemination of fake news or disinformation

One common approach to regulation is the introduction of legal provisions that impose criminal penalties on the disseminators or creators of disinformation or fake news. This is a worrisome trend, as it has significant implications for freedom of expression and freedom of the press. As discussed in the definition section “Why are definitions of fake news and disinformation problematic?”, the extreme difficulty of arriving at clear definitions of prohibited behaviors can lead to unjustified restrictions and direct harms to human rights. Though some countries adopt such measures in recognition of the impact of disinformation on political and electoral processes and in an attempt to mitigate it, such provisions are also opportunistically adopted by regimes to stifle political opposition and muzzle the press. Even in countries where measures might be undertaken in a good faith attempt to protect democratic spaces, the potential for abuse and selective enforcement is significant. Governments have also passed a number of restrictive and emergency laws in the name of curbing COVID-related misinformation and disinformation with similarly chilling implications for fundamental freedoms. The Poynter Institute maintains a database of anti-misinformation laws with an analysis of their implications.

Before adopting additional criminal penalties for the dissemination of disinformation, legislators and regulators should consider whether existing provisions in the criminal law, such as those covering defamation, hate speech, identity theft, consumer protection, or the abuse of state resources, are sufficient to address the harms that new provisions would target. If the existing criminal law framework is deemed insufficient, revisions to criminal law should be undertaken with caution and awareness of the potential for democratically damaging downstream results.

“If we want to fight hoaxes, it’s not through criminal law, which is too rigid.” — Indonesian Civil Society Representative

It should be noted that some attempts have been made to legislate against online gender-based violence, which sometimes falls into the category of disinformation. Scholars Kim Barker and Olga Jurasz consider this question in their book, Online Misogyny as Hate Crime: A Challenge for Legal Regulation?, where they conclude that existing legal frameworks have been unsuccessful in ending online abuse because they focus more on punishment after a crime is committed rather than on prevention.

ii. Restrict content or behaviors: Measures directed at social media and technology platforms

National legislation directed at social media and technology platforms is often undertaken in an attempt to increase domestic oversight over these powerful international actors who have little legal obligation to minimize the harms that stem from their products. Restrictions on content and behaviors that compel platform compliance can make companies liable for all of the content on their platforms, or more narrowly target only the paid advertising on their platforms. In this debate, platforms will argue, with some merit, that it is nearly impossible for them to screen billions of daily individual user posts. Conversely, it may be more reasonable to expect social media platforms to scrutinize paid advertising content.

As discussed in the section on domestic actors, some countries prohibit paid political advertising outside of the campaign period, some restrict paid political advertising altogether, while others limit the ability to place political advertisements only to entities that are registered for the election. In some instances, countries have called on social media companies to enforce these restrictions by making them liable for political advertisements on their platforms. 

Placing responsibility on the platforms to enforce national advertising restrictions also has the potential to create a barrier for political or issue advertisements placed by seemingly non-political actors or by unofficial accounts affiliated with political actors. However, if national regulators do take this approach, the difficulties of compliance with dozens if not hundreds of disparate national regulatory requirements are certain to be a point of contention with the companies. Like any other measure that places boundaries on permissible political expression, it also carries the potential for abuse.


The global conversation around platform regulations that would fundamentally alter the business practices of social media and technology companies – anti-trust or user data regimes, for example – is beyond the scope of this chapter. The focus instead is on attempts at the national level to place enforceable obligations on the platforms that alter the way they conduct themselves in a specific national jurisdiction.

Often, the enforceability of country-specific regulations placed on the platforms will differ based on the perceived political or reputational risk associated with inaction in a country, which can be associated with market size, geopolitical significance, potential for electoral violence, or international visibility. That being said, some measures are more easily complied with in the sense that they do not require platforms to reconfigure their products in ways that have global ramifications and thus are more easily subject to national rule making. 

The ability of a country to compel action from the platforms can also be associated with whether the platforms have an office or legal presence in that country. This reality has spawned national laws requiring platforms to establish a local presence to respond to court orders and administrative proceedings. Germany has included a provision to this effect in its Interstate Media Treaty. Requirements to appoint local representatives that enable platforms to be sued in court become highly contentious in countries that lack adequate legal protections for user speech and where fears of censorship are well-founded. A controversial Turkish law went into effect on October 1, 2020, requiring companies to appoint a local representative accountable to local authorities’ orders to block content deemed offensive. U.S.-based social media companies have chosen not to comply at the urging of human rights groups, and face escalating fines and possible bandwidth restrictions that would throttle access to the platforms in Turkey in the case of continued non-compliance. This contrast illustrates the challenges social media platforms must navigate in complying with national law. Measures that constitute reasonable oversight in a country with robust protections for civil and political rights might serve as a mechanism for censorship in another.

At the same time, joint action grounded in international human rights norms could be one way for countries with less individual influence over the platforms to elevate their legitimate concerns. The Forum on Information and Democracy’s November 2020 Policy Framework articulates the challenge of harmonizing transparency requirements while preventing politically motivated abuse of national regulations. While joint action at the level of the European Union is occurring, the report points to the possibility of the Organization for American States, the African Union, the Asia-Pacific Economic Cooperation or Association of Southeast Asian Nations, or regional development banks as potential organizing forums for joint action in other regions.


 

a. Hold platforms liable for all content and require removal of content     

The debate over what content should be allowable on social media platforms is global in scope. Analysis on this topic is prolific and global consensus is unlikely to emerge given legitimate and differing definitions of the bounds that can and should be placed on speech and expression. Many of these measures that introduce liability for all content have hate speech as a central component. While hate speech is not limited to political or electoral periods, placing pressure on societal fault lines through the online amplification of hate speech is a common tactic used in political propaganda and by disinformation actors during electoral periods.

Some national jurisdictions have attempted to introduce varying degrees of platform responsibility for all the content hosted on their platforms, regardless of whether that is organic or paid content. 

The German Network Enforcement Act (NetzDG) requires social media companies to delete “manifestly unlawful” content within 24 hours of being notified. Other illegal content must be reviewed within seven days of being reported and deleted if found to be in violation of the law. Failure to comply carries up to a 5 million euro fine, though the law exempts providers who have fewer than 2 million users registered in Germany. The law does not actually create new categories of illegal content; its purpose is to require social media platforms to enforce 22 statutes on online content that already exist in the German code. It targets already-unlawful content such as “public incitement to crime,” “violation of intimate privacy by taking photographs,” defamation, “treasonous forgery,” forming criminal or terrorist organizations, and “dissemination of depictions of violence.” It also includes Germany’s well-known prohibition of glorification of Nazism and Holocaust denial. The takedown process does not require a court order or provide a clear appeals mechanism, relying on online platforms to make these determinations.25

The law has been criticized as being too broad and vague in its differentiation of “unlawful content” and “manifestly unlawful content.” Some critics also object to NetzDG as a “privatized enforcement” law because online platforms assess the legality of the content, rather than courts or other democratically legitimate institutions. It is also credited with inspiring a number of copycat laws in countries where the potential for censoring legitimate expression is high. As of late 2019, Foreign Policy identified 13 countries that had introduced similar laws; the majority of these countries were ranked as “not free” or “partly free” in Freedom House’s 2019 Freedom on the Net assessment.26

France, which has pre-existing rules restricting hate speech, also introduced measures similar to Germany’s to govern content online. However, France’s Constitutional Council struck these measures down in 2020; like the German law, they would have required platforms to review and remove hateful content flagged by users within 24 hours or face fines. The court ruled that the provisions would lead platforms to adopt an overly conservative attitude toward removing content in order to avoid fines, thus restricting legitimate expression.

The United Kingdom is another frequently cited example that illustrates various approaches to regulating harmful online content, including disinformation. A 2019 Online Harms White Paper outlining the UK government’s plan for online safety proposed placing a statutory duty of care on internet companies for the protection of their users, with oversight by an independent regulator. A public consultation on the White Paper informed proposed legislation in 2020 that focuses on making companies responsible for the systems they have in place to protect users from harmful content. Rather than requiring companies to remove specific pieces of content, the new framework would require platforms to publish clear policies on the content and behavior that are acceptable on their sites and to enforce those standards consistently and transparently.

These approaches contrast with the Bulgarian framework, for example, which exempts social media platforms from editorial responsibility.27 In the United States, Section 230 of the Communications Decency Act likewise expressly shields social media platforms from liability for content posted by their users.

Other laws that introduce some degree of liability or responsibility for platforms to moderate harmful content have been proposed or enacted in countries around the globe. Broadly speaking, this category of regulatory response is the subject of fierce debate over its potential for censorship and abuse. The models in Germany, France, and the United Kingdom are frequently cited as examples of attempts by consolidated democracies to more actively impose a duty on platforms for the content they host while incorporating sufficient checks to protect freedom of expression – though measures in all three countries have also been criticized for the ways they attempt to strike this balance. These different approaches also illustrate how a proliferation of national laws introducing platform liability is poised to place a multitude of potentially contradictory obligations on social media companies.

b. Prohibit platforms from hosting paid political advertising

Some jurisdictions prohibit paid campaign advertising in traditional media outright, with that ban extending or potentially extending to paid advertising on social media.28 “For decades, paid political advertising on television has been completely banned during elections in many European democracies. These political advertising bans aim to prevent the distortion of the democratic process by financially powerful interests and to ensure a level playing field during elections.”29 

The French Electoral Code stipulates that for the 6 months prior to the month of an election, commercial advertising for the purposes of election propaganda via the press or “any means of audiovisual communication” is prohibited.30 A stipulation such as this is contingent on clear definitions of online campaigning and political advertising; amendments to the French Electoral Code in 2018, for example, attempt to inhibit a broad range of political and issue advertisements by stipulating that the law applies to “information content relating to a debate of general interest,”31 rather than limiting the provision to ads that directly reference candidates, parties, or elections. In the French case, these provisions, along with a number of transparency requirements discussed in the sections below, led some platforms, such as Twitter, to ban all political campaign ads and issue advocacy ads in France, a move that was later expanded into a global policy. Similarly, Microsoft banned all ads in France “containing content related to debate of general interest linked to an electoral campaign,” which is also now a global policy. Google banned all ads containing “informational content relating to a debate of general interest” between April and May 2019 across its platforms in France, including YouTube.32 The French law led Twitter to initially block an attempt by the French Government’s information service to pay for sponsored tweets for a voter registration campaign in the lead-up to European parliamentary elections, though this position was eventually reversed.

The French ban on issue advertising on social media was legitimated by a parallel ban on political advertising via print or broadcast media. Other jurisdictions seeking to impose restrictions on social media advertising might similarly consider aligning those rules with the principles governing offline or traditional media advertising. 

c. Hold platforms responsible for enforcing restrictions on political advertisements run outside a designated campaign period 

Some jurisdictions have opted to place responsibility on the entities that sell political advertisements, including social media companies, to enforce restrictions on advertising outside of the designated campaign period – both before the campaign period begins as well as during official silence periods in the day or days directly before the election.

Indonesia had some success calling on the platforms to enforce the three-day blackout period prior to its 2019 elections. According to interlocutors, Bawaslu, Indonesia’s election supervisory body, sent a letter to all of the platforms advising them that it would enforce criminal penalties should the platforms allow paid political advertising during the designated blackout period. Despite responses from one or more of the platforms that the line between advertising in general and political advertising was too uncertain to enforce a strict ban, Bawaslu insisted that the platforms find a way to comply. The platforms in turn reported rejecting large numbers of advertisements during the blackout period. Bawaslu’s restrictions applied only to paid advertising, not organic posts.

Under India’s “Voluntary Code of Ethics for the 2019 General Election,” social media companies committed themselves to take down prohibited content within three hours during the 48-hour silence period before polling. The signatories to the Code of Ethics developed a notification mechanism through which the Election Commission could inform relevant platforms of potential violations of Section 126 of the Representation of the People Act, which bars political parties from advertising or broadcasting speeches or rallies during the silence period.

India and Indonesia are both very large markets, and most global social media companies have a physical presence in both countries. These factors significantly contribute to these countries’ abilities to compel platform compliance. This route is unlikely to be as effective in countries that do not have as credible a threat of legal sanction over the platforms or the ability to place penalties or restrictions on the platforms in a way that impacts their global business.  

For countries that do attempt this route, restrictions that rely on the platforms for enforcement must – as with restrictions on social media campaigning placed on domestic actors – acknowledge the definitional distinctions between paid and unpaid content and between political and issue campaigning in order to be enforceable. The Canadian framework acknowledges the complexity of enforcing campaign silence online by exempting content that was in place before the blackout period and has not been changed.33 Facebook’s decision to unilaterally institute a political advertising blackout period directly surrounding the 2020 U.S. elections likewise limited political advertising to content already running on the platform; no ads containing new content could be placed. Moves to restrict paid advertising may advantage incumbents or other contestants that have had time to establish a social media audience in advance of the election, since paid advertising is a critical tool that can allow new candidates to reach large audiences.

d. Only allow platforms to run pre-certified political advertisements

During the 2019 elections, the Election Commission of India required that paid online advertising that featured the names of political parties or candidates be vetted and pre-certified by the Election Commission. Platforms, in turn, were only allowed to run political advertisements that had been pre-certified.34 

This measure applied only to a narrow band of political advertisements – any issue ads or third-party ads that avoid explicit mention of parties and candidates would not need to be pre-certified under these rules. For other countries, implementation of a pre-certification requirement would necessitate institutional capacity on par with that of Indian electoral authorities to make the vetting of all ads possible, as well as a market size and in-country company presence sufficient to compel the platforms to comply.

Mongolia’s draft electoral laws would require political parties and candidates to register their websites and social media accounts. The draft laws would also block access to websites that run content by political actors that do not comply; as worded, the provision appears to penalize third-party websites for breaches committed by a contestant. Provisions further require that the comments function on official campaign websites and social media accounts be disabled, with non-compliance incurring a fine.35 As the law is still in draft form, the enforceability of these measures had not been tested at the time of publication.

e. Obligate platforms to ban advertisements placed by state-linked media 

At present, social media platforms have differing policies on the ability of state-controlled news media to place paid advertising on their platforms. While platforms have largely adopted restrictions on foreign actors’ ability to place political advertising, some platforms still allow state-controlled media to pay to promote their content to foreign audiences more generally. Twitter has banned state-controlled media entities from placing paid advertising of any kind on its platform.36 For countries where Facebook’s Ad Library is enforced, the advertiser verification process attempts to prevent foreign actors from placing political advertising. However, Facebook does not currently restrict the ability of state-linked media to pay to promote their news content to foreign audiences, a tool that state actors use to build foreign audiences.

Analysis by the Stanford Internet Observatory demonstrates how Chinese state media uses social media advertising as a part of broader propaganda efforts and how such efforts were used to build a foreign audience for state-controlled traditional media outlets and social media accounts. The ability to reach this large audience was then used to deceptively shape favorable narratives about China during the coronavirus pandemic.

Prohibitions against foreign state-linked actors paying to promote their content to domestic audiences could be tied to other measures that attempt to bring transparency to political lobbying. For example, some experts in the U.S. propose applying the Foreign Agents Registration Act (FARA) to restrict the ability of foreign agents registered under FARA to advertise to American audiences on social media. This in turn requires a consistent and proactive effort on the part of U.S. authorities to ensure that state media outlets are identified and registered as foreign agents. Rather than prohibit ads placed by known foreign agents, another option is to require platforms to label such ads to increase transparency. Several platforms have independently adopted such provisions,37 though enforcement has been inconsistent.

f. Restrict how platforms can target advertisements or use personal data

Another avenue being explored in larger markets is placing restrictions on the ways in which personal data can be used by platforms to target advertising. Platforms, to some degree, are adopting such measures in the absence of specific regulation. Google, for example, allows a narrower range of targeting criteria to be used to place election ads compared to other types of advertisements. Facebook does not limit the targeting of political ads, though it offers various tools to provide a degree of transparency for users on how they are being targeted. Facebook also allows users to opt out of certain political ads, though these options are only available in the United States as of early 2021. Less well understood are the tools used by streaming television services to target ads. It is unlikely that national-level regulation of this nature outside of the U.S. or EU will be able to alter the platforms’ policies. Further discussion of this topic can be found in the topical section on platform responses to disinformation.

Measures that promote transparency can include obligations for domestic actors to disclose the designated political activities they engage in on social media, as well as obligations for digital platforms to disclose information on the designated political activities that take place on their platforms or to label certain types of content that may otherwise be misleading. These measures are part of the regulatory push back against disinformation as they allow insight into potentially problematic practices being used by domestic political or foreign actors and build public understanding of the origins of the content they are consuming. Transparency creates the opportunity for the public to make better-informed decisions about their political information. 

i. Promote Transparency: Measures directed at domestic actors
a. Require the declaration of social media advertising as a campaign expenditure

One of the most common approaches to promoting increased transparency by domestic actors is to expand the definition of “media” or “advertising” that is subject to existing disclosure requirements to include online and social media advertising. Expansions of this nature should take into account the definitional considerations at the beginning of this section of the guidebook. Detailed disclosure requirements may be needed to delineate which types of expenditures constitute social media advertisements, including, for example, payments to third parties to post supportive content or attack opponents. While expanding existing disclosure requirements extends existing principles of transparency, crafting meaningful disclosure requirements necessitates careful consideration of the ways in which social media and online advertising differ from non-digital forms of political advertising.

To offer illustrative examples, section 349 of Canada’s Elections Act has extensive regulation on third-party expenditure and the use of foreign funding, which captures paid advertising online. A draft resolution in Colombia has also been put forth with the aim of categorizing paid advertising on social media as a campaign expenditure subject to spending limits. The resolution would empower Colombian electoral authorities to investigate these expenditures, given that they are often incurred by third parties and not by the campaign itself. It would establish a register of online media platforms that sell political advertising space and subject political advertising on social media to the same framework as political campaigning in public spaces. 

b. Require registration of party and candidate social media accounts 

While monitoring the official accounts of parties and candidates provides only a narrow glimpse into political advertising and political messages circulating on social media, having a record of official social media accounts is a first step toward transparency. This could be achieved by requiring candidates and parties to declare the accounts that are administered by or financially linked to their campaigns. This approach can provide a starting point for oversight bodies to monitor compliance with local laws and regulations governing campaigning. Such a requirement could be paired with a regulation stipulating that candidates and campaigns may only engage in certain campaign activities, such as paying to promote political content or issue ads, through registered social media accounts. This combination of measures can create an avenue for enforcement in instances where parties or candidates are found to be using social media accounts in prohibited ways, such as concealing financial relationships with nominally independent accounts. Enforcement would necessitate monitoring for compliance, which is discussed in the Enforcement subsection at the end of this topical section of the guidebook.

This approach has been taken in Tunisia, where a directive issued by the country’s election commission requires candidates and parties to register their official social media accounts with the commission.38 Mongolia’s draft election laws would also impose an obligation for candidate, party, and coalition websites and social media accounts to be registered with the Communications Regulatory Commission (for parliamentary and presidential elections) and with the respective election commission (for local elections).39 The Mongolian law should not, however, be taken as a model in its entirety, as it raises freedom of expression concerns and faces enforcement limitations given its definitional vagueness.

c. Require disclosure and labeling of bots or automated accounts

“Bots” or “Social Bots,” which can perform automated actions online that mimic human behaviors, have been used as a part of disinformation campaigns in the past, though the degree to which they have impacted electoral outcomes is disputed.40 When deployed by malign actors in the information space, these lines of code can, for example, power artificial social media personas, generate and amplify social media content in large quantities, and be mobilized to harass legitimate social media users. 

As public awareness of this tactic has grown, lawmakers have attempted to legislate in this area to mitigate the problem. Legislative approaches that seek to ban the use of bots have largely failed to gain traction. A measure to criminalize bots or software used for online manipulation was proposed in South Korea, for example, but ultimately was not enacted. A proposed bill in Ireland to criminalize the use of a bot to post political content through multiple fake accounts also failed to become law. 

Opinion is divided on the efficacy and freedom of expression implications of such measures. Detractors of this approach suggest that such legislation can inhibit political speech and that overly broad measures can undermine legitimate political uses for bots, such as a voter registration drive or an electoral authority using a chatbot to respond to common voter questions. Detractors also suggest that legislating against specific disinformation tactics is a losing battle given that tactics evolve so quickly. Removing networks of automated bots also aligns with social media platforms’ reputational self-interest, meaning that legislation against such operations may not be necessary.

Efforts to add transparency and disclosure to the use of bots may be a less controversial approach than criminalizing their use. California passed a law in 2019 making it illegal to “use a bot to communicate or interact with another person in California online with the intent to mislead the other person about its artificial identity.” Germany’s Interstate Media Treaty (Medienstaatsvertrag – “MStV”) also includes provisions that promote transparency around bots by obligating platforms to identify and label content that is disseminated by bots. Measures that criminalize or require disclosure of the use of bots do, however, present enforcement challenges given the difficulty of reliably identifying bots.
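To illustrate why such provisions are hard to enforce, the following is a minimal sketch, assuming invented field names and thresholds, of the kind of rule-of-thumb heuristic a monitoring team might apply when deciding whether an account warrants a bot label. It is not a reliable detection method; coordinated operations deliberately mimic human activity patterns and stay below simple thresholds.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    """Illustrative activity summary for a single account; field names are assumptions."""
    posts_per_day: float     # average posting volume
    night_share: float       # share of posts made between 00:00 and 05:00 local time
    duplicate_share: float   # share of posts that near-duplicate other accounts' content
    account_age_days: int    # days since the account was created

def likely_automated(a: AccountActivity) -> bool:
    """Toy heuristic: flag accounts whose volume, timing, and duplication patterns
    look implausible for a human user. Sophisticated operations deliberately stay
    below thresholds like these, which is why rule-based identification alone is
    unreliable for enforcement purposes."""
    signals = [
        a.posts_per_day > 100,    # sustained very high volume
        a.night_share > 0.4,      # heavy posting at atypical hours
        a.duplicate_share > 0.5,  # mostly copied or amplified content
        a.account_age_days < 30,  # newly created account
    ]
    return sum(signals) >= 3      # require several signals before applying a label

# A new, high-volume account posting mostly duplicated content would be flagged:
print(likely_automated(AccountActivity(250, 0.5, 0.8, 12)))  # True
```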

 

“By the time lawmakers get around to passing legislation to neutralize a harmful feature, adversaries will have left it behind.” — Renee DiResta, Research Director at the Stanford Internet Observatory

d. Require disclosure of the use of political funds abroad

Facing tightening regulations in their home countries, political actors might also seek to place political advertisements on social media by coordinating with actors located outside of the country. Foreign funding might also be used to place advertisements that target diaspora communities eligible for out-of-country voting. While platforms with political ad disclosure and identification requirements will in some cases prohibit the purchase of political advertisements in foreign currencies or by accounts operated from another country, these efforts are not yet sufficient to catch all political or issue advertisements placed extraterritorially. 

Disclosure requirements that address foreign funding may wish to consider the ways in which foreign expenditures on social media advertising might differ from those in traditional media. New Zealand, for example, requires full disclosure of any advertising purchased by entities outside of the country, so that non-compliance constitutes a campaign finance violation.41 It could, however, be difficult to prove that the beneficiary political party or candidate is aware of campaign funding being expended to their benefit extraterritorially, which could render enforcement futile.

ii. Promote Transparency: Measures directed at platforms
a. Require platforms to maintain ad transparency repositories

Some countries have imposed legal obligations on larger online platforms to maintain repositories of the political advertisements purchased on their platforms. France and Canada, for instance, require large online platforms to maintain a political ad library. India’s Code of Ethics, signed by social media companies operating in the country ahead of 2019 elections, committed signatories to “facilitating transparency in paid political advertisements, including utilizing their pre-existing labels/disclosure technology for such advertisements.” This measure may have been decisive in compelling these companies to expand coverage of their ad transparency features to India.

Facebook voluntarily introduced a publicly accessible Ad Library in a very limited number of countries in 2018 and, as of early 2021, had expanded coverage to 95 countries and territories. Google maintains political ad transparency disclosures for Australia, the EU and UK, India, Israel, New Zealand, Taiwan, and the United States but has been slower to expand these tools to additional markets. As platforms contemplate where to next expand their advertising transparency tools, it is conceivable that updating national law to require platforms to maintain ad repositories could influence how companies prioritize countries for expansion. Details on the functionality of advertising transparency tools can be found in the guidebook section covering platform responses to disinformation.

Legal mandates, however, might disadvantage smaller online platforms, since the cost of setting up and maintaining advertising repositories might be disproportionately higher for smaller platforms than for larger platforms. The legal requirement might thereby inadvertently stifle platform plurality and diversity. This side effect can be remedied by creating a user threshold for the obligation. For example, Canada’s ad transparency requirements apply only to platforms with more than three million regular users in Canada,42 though even this threshold might be too low to avoid becoming a barrier to competition. National regulators might also consider a standard whereby a platform is required to provide ad transparency tools if a certain percentage of the country’s population uses their services.

Some countries where the platforms do not maintain ad repositories have experimented with their own. Ahead of the 2019 elections, South Africa tested a new political ad repository, built in partnership with election authorities and maintained by civil society. Compliance was not obligatory and was accordingly minimal among political parties, but the effort showed sufficient promise that the implementers of the ad repository are considering making compliance legally mandatory for future elections.43 

Legal measures that compel, or attempt to compel, platforms to maintain ad repositories might also incorporate provisions requiring the clear labeling of advertisers to distinguish between paid and organic content, as well as labels that distinguish among advertisements, editorial, and news content. Requirements to label content originating from state-linked media sources might also be outlined. Measures might also include identity verification requirements for actors or organizations that run political and issue advertisements. However, these provisions would likely require alterations to the functionality of the platform’s ad transparency tools, a change that is more likely with joint pressure from multiple countries.
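As a rough sketch of what such a mandate could require in practice, the structure below gathers the kinds of fields named in the provisions above (advertiser verification, “paid for by” disclosures, paid versus organic labels, state-media labels) into a hypothetical repository record with a simple compliance check. The schema and field names are illustrative assumptions, not any platform’s or regulator’s actual format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class PoliticalAdRecord:
    """Hypothetical entry in a mandated political ad repository; the schema is illustrative."""
    ad_id: str
    advertiser_name: str
    payer_name: str                    # the "paid for by" disclosure
    advertiser_verified: bool          # identity verification requirement
    is_paid: bool                      # distinguishes paid placements from organic content
    state_linked_media: bool           # whether the advertiser is a state-linked outlet
    spend_range: Tuple[int, int]       # e.g., (1000, 4999) in local currency
    impressions_range: Tuple[int, int]
    labels: List[str] = field(default_factory=list)  # e.g., ["paid political ad"]

def disclosure_gaps(record: PoliticalAdRecord) -> List[str]:
    """Return the transparency requirements this record would fail to meet."""
    gaps = []
    if not record.advertiser_verified:
        gaps.append("advertiser identity not verified")
    if not record.payer_name:
        gaps.append("missing 'paid for by' disclosure")
    if record.is_paid and "paid political ad" not in record.labels:
        gaps.append("paid content not labeled as such")
    if record.state_linked_media and "state-controlled media" not in record.labels:
        gaps.append("state-linked media content not labeled")
    return gaps

# Example: a verified state-linked advertiser that omitted the state-media label
record = PoliticalAdRecord("ad-001", "Outlet X", "Outlet X", True, True, True,
                           (1000, 4999), (50000, 99999), ["paid political ad"])
print(disclosure_gaps(record))  # ['state-linked media content not labeled']
```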

b. Require platforms to provide algorithmic transparency 

Additional measures being explored in France, Germany, and elsewhere focus on compelling platforms to provide greater insight into the algorithms that influence how content – organic and paid – is surfaced to individual users, or, put another way, transparency for users into how their data is used to inform the ads and content that they see.

Germany’s MStV law, for example, introduces new definitions and rules intended to promote transparency across a comprehensive array of online portals and platforms. “Under the transparency provisions, intermediaries will be required to provide information about how their algorithms operate, including: [1] The criteria that determine how content is accessed and found. [2] The central criteria that determine how content is aggregated, selected, presented and weighed.”44 EU law on comparable topics has in the past drawn on German law to inform its development, suggesting that this route may influence conversations at the EU level on platform transparency and, subsequently, the global operations of digital media providers and intermediaries.
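To make the two categories of disclosure concrete, the sketch below shows what an enumeration under these headings might look like. The criteria and weights are invented for the example; the provision requires disclosure of the criteria themselves, not any particular format.

```python
# Purely illustrative: the criteria and weights below are invented for the example
# and do not describe any actual platform's ranking system.
algorithmic_disclosure = {
    "access_and_findability_criteria": [
        "full-text match between a search query and the content",
        "recency of publication",
        "language and region settings of the user",
    ],
    "aggregation_selection_and_weighting_criteria": {
        "predicted engagement (clicks, watch time)": 0.5,
        "similarity to content the user previously viewed": 0.3,
        "accounts the user follows or subscribes to": 0.2,
    },
}

for category, criteria in algorithmic_disclosure.items():
    print(category, "->", criteria)
```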

The Forum on Information and Democracy’s November 2020 Policy Framework provides a detailed discussion on how algorithmic transparency might be regulated by state actors.45

Measures designed to promote equity can include creating and enforcing spending caps for political parties and candidates with the goal of creating a level playing field for less financially well-resourced contenders. Other countries are experimenting with obligations for platforms to provide equitable advertising rates or provide free, equitably available ad space to candidates and parties.

Promoting equity as a deterrent to disinformation is an acknowledgment of the financial foundations of many coordinated disinformation campaigns. By providing political contestants with more equitable opportunities to be heard by the electorate, these measures attempt to lessen the advantage of financially well-resourced contenders who may – among other tactics – direct resources toward the promotion of disinformation to skew the information space. Strategies that promote equity can also benefit women, people with disabilities, and people from marginalized groups who are often less well-resourced than their more privileged counterparts and who are often targets of disinformation campaigns. 

i. Promote equity: Measures directed at domestic actors
a. Cap party or candidate social media expenditures

An approach to leveling the playing field on social media is capping how much each party or candidate can spend on social media, either as an absolute cap or as a percentage of overall campaign spending. 

Romania, for example, caps expenditure for paid social media advertising at 30 percent of the overall allowed spending.46 In the U.K., spending on social media is counted toward candidates’ and parties’ applicable spending limits and must be reported. Any material published on social media that is “election material” – i.e., it promotes or opposes specific political parties, candidates or parties that support particular policies or issues, or particular types of candidates, and is made available to the public – counts toward the limit.47

These measures do, however, require countries to operate effective campaign spending disclosure and investigation mechanisms, an asset that many democracies lack.

ii. Promote equity: Measures directed at platforms
a. Require platforms to publish advertising rates and treat electoral contestants equally

Multiple countries have updated their legal frameworks to extend the principle of equity in the pricing of political advertisements to social media. In the context of traditional media, legal and regulatory measures might be used to ensure that candidates and parties have access to the same advertising opportunities at the same price. For example, measures may require television, radio, or print media to publish their advertising rates as a means to ensure that all actors have equal access to these distribution channels and that outlets cannot disadvantage certain political views by charging different rates.

Extending this logic to social media – where advertising placements are often determined in real-time online auctions that take place in the blink of an eye as users scroll through their social media feeds or refresh their internet browsers – presents a different challenge. The cost to place an ad will fluctuate based on numerous factors that determine how much demand exists to reach specific users. For example, in 2019, during the U.S. Democratic presidential primary race, the cost of reaching likely Democratic voters and donors on Facebook increased dramatically as the 20 candidates competing for the Democratic presidential nomination drove up demand, with implications for down-ballot candidates trying to reach voters as well. The cost for Republican candidates and organizations to reach voters was significantly lower given that there was no competitive Republican presidential primary to drive up demand.
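The demand effect described above can be made concrete with a deliberately simplified auction sketch; real platform auctions also weigh predicted relevance and ad quality, so the figures are illustrative only.

```python
def clearing_price(bids):
    """Simplified second-price auction: the winning advertiser pays roughly the
    second-highest bid. Real platform auctions also factor in predicted relevance
    and ad quality; this sketch isolates the effect of demand on price."""
    ranked = sorted(bids, reverse=True)
    return ranked[1] if len(ranked) > 1 else ranked[0]

# One campaign bidding on an audience segment versus many campaigns competing
# for the same likely-primary voters (bids in currency units per impression).
print(clearing_price([0.40, 0.10]))                          # 0.10
print(clearing_price([0.40, 0.38, 0.36, 0.35, 0.33, 0.30]))  # 0.38
```

With a single competitor the ad clears at a low price, while many campaigns bidding on the same audience push the clearing price close to the top bids.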

Despite the complexity of advertising price determinations on social media, multiple countries have attempted to regulate in this area:

  • Paraguay stipulates that social media platforms that alter their advertising rates in ways that favor any party or political movement over another will be subject to a fine.48
  • El Salvador’s Electoral Code references a constitutional obligation that the media must provide information on the rates they charge for their services, and that the constitutional principle of equity in pricing among political parties is applicable in the case of social media.49 
  • Venezuelan regulations bar social media platforms from endorsing or supporting candidates while also barring them from refusing to accept paid advertising from any candidate.50

Requiring social media platforms to institute a standard of equity among parties and candidates would require changes to how advertisements are selected and shown to users or how they are priced. Requiring social media platforms to treat candidates and parties equitably presents a range of questions for enforcement, but it is an important principle to consider given platforms’ immense power in this regard. Companies have the technological edge to advantage or disadvantage preferred candidates by, for example, more effectively targeting the ads of candidates whose positions are more favorable toward the platforms themselves. Recent examples in India and the United States have demonstrated the ways in which political pressure and public perception can shape content moderation decisions. Platform actions of this kind would be largely undetectable with the transparency tools available in many countries, and it is uncertain whether such practices would constitute a violation under current legal and regulatory frameworks.

Another possibility is to require that social media platforms publish advertising rates. This type of provision could be incorporated into the standards required of a political ad library or another ad repository, which would allow transparency into the comparative rates that parties and candidates are paying to get their messages out. A movement to create equity in political advertising would likely require increased global pressure from multiple countries – including large markets such as the EU and the U.S. – to gain traction, but it is an underexplored avenue. There would also likely need to be a discussion about how equity should be conceived in light of the different nature of online advertising.

b. Compel platforms to provide free advertising space to candidates and parties

The laws and regulations of some countries stipulate that traditional media providers give, in equal measure, free advertising time or space to political parties or candidates that meet predetermined criteria. This is intended to provide competing parties more equitable access to bring their platforms and ideas to the electorate regardless of their financial resources. 

The present study has not identified any jurisdictions that require social media platforms to grant equal free advertising space to candidates or political parties. However, the Bulgarian framework allows social media platforms to equitably allocate free advertising space to electoral contestants and requires the platforms to disclose how they allocate it among candidates and parties.51 The Bulgarian approach could serve as a precursor for countries contemplating compelling social media platforms to offer free campaign advertising space on an equal basis. It is feasible that a national-level provision drawing on existing national law to extend the precedent of equitable free advertising could prevail on major social media companies to provide ad credits to qualified parties, though this is as yet untested.

 

Measures to promote democratic information are less prevalent, but they do present an opportunity to obligate platforms, and possibly domestic actors, to proactively disseminate unbiased information in ways that can build resilience to political and electoral disinformation. Though there are few real-world examples, this category provides an opportunity to consider what types of legal and regulatory approaches might be feasible.

“Solutions could be aimed at enhancing individual access to information rather than merely protecting against public harm.” — David Kaye, United Nations Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression

i. Promote Democratic Information: Measures directed at domestic actors
a. Require parties and candidates to issue corrections when party members or supporters share bad information 

South Africa’s draft code of conduct on Measures to Address Disinformation Intended to Cause Harm During the Election Period (detailed discussion of this code of conduct (CoC) can be found in the topical section on EMB approaches to countering disinformation) stipulates that the Election Commission can compel parties and candidates to correct electoral disinformation that is shared by parties, candidates, or their members and supporters; “the registered party or candidate shall act immediately to take all reasonable measures in an effort to correct the disinformation and remedy any public harm caused, as may be appropriate in the circumstances and in consultation with the Commission.”52 

South Africa’s CoC defines electoral disinformation with specificity and provides a framework for reporting and ruling on violations, which makes these provisions implementable. Definitional specificity around what types of electoral disinformation would be subject to correction and an independent oversight body are necessary for this approach to have an impact and not place undue obligations on political contestants.

If narrowly tailored and enforced, a mechanism to compel political contestants to correct information damaging to the credibility of the electoral process through their own networks of supporters has the potential to reach impacted audiences via the same channels where they might have encountered the problematic content. This, in turn, can amplify messages that election authorities are attempting to disseminate widely.

ii. Promote Democratic Information: Measures directed at platforms
a. Require platforms to offer election authorities free advertising space for voter education

While requiring public or private media outlets to provide equal, free advertising space to political contestants has precedent in a number of countries, another route is to mandate that free advertising space be made available to election authorities. Social media platforms offering free ad-space to election management bodies could be a useful and enforceable provision that could, for example, help boost turnout, educate voters in ways that mitigate invalid voting, or enhance marginalized groups’ access to information. 

Using public media channels for this purpose is common practice. In addition, some countries require private media actors to offer free space to election authorities. In Mexico, the Constitution stipulates that during electoral periods, radio and television broadcasters must provide 48 minutes of free advertising space every day to be divided among electoral authorities, with space also reserved for messages from political parties.53 Airtime is also provided in a more limited amount during non-electoral periods.54 Outside of this allotted time, political parties and candidates are not allowed to buy or place any additional television or radio advertisements.55 Venezuelan electoral law also requires private television providers to offer free advertising space to the election management body for civic education and voter information.56

Obligating private companies to serve as a channel for voter education is an interesting idea. Major platforms, including Facebook, Google, Instagram, and Twitter, have elected to provide voter information, such as election day reminders and instructions on how to vote, of their own volition (see the subcategory on EMB coordination with social media and technology companies). Ahead of the 2020 U.S. elections, Facebook also voluntarily launched a new tool called Voting Alerts that allowed state and local election authorities to reach their constituents with notifications on Facebook, whether or not the Facebook user followed the election authority’s Facebook Page. Given the voluntary nature of such measures, voter information integration into the platforms does not take place in all countries or for all elections. Platforms are less likely to roll out features for local or municipal elections, even if those elections are taking place nationwide, than they are for presidential or parliamentary elections. A requirement for platforms to provide free space for voter education as part of the legal and regulatory code could be something to explore, particularly in countries where an analogous precedent exists for traditional or public media. Requirements could also extend to advertisement-funded streaming television providers, search engines, or other media intermediaries as additional channels for integrating voter information.

 

Thoughtful regulation means little if it is not accompanied by meaningful consideration of how that regulation will be enforced. A lack of realism about enforcement threatens to undercut the authority of the regulatory bodies creating provisions and establishes unrealistic precedents for what will be achievable through regulation alone.

The levers of enforcement will change depending on whether provisions are aimed at domestic actors or at platforms. In the case of the former, governments and political actors in office are increasingly complicit in, or directly responsible for, the very behaviors that the regulatory actions outlined in this document seek to curb. In these instances, the ability to meaningfully enforce provisions will rely on the independence of enforcement bodies from the executive.

The ability for an individual country to enforce provisions directed at foreign actors is very limited, which is one of the reasons why legal and regulatory approaches directed at foreign actors are not included in this section of the guidebook. 

Provisions directed at platforms will vary significantly in how enforceable they may be. Provisions that require alterations to a platform’s engineering or global business practices are highly unlikely to come from national-level laws passed in anything other than the largest-market countries in the world. However, many major social media platforms have thus far been ahead of lawmakers in instituting new provisions and policies to define and restrict problematic content and behaviors or to promote transparency, equity, and/or democratic information. These provisions have not been rolled out equally, though, and where national-level legislation might have an impact is in pushing companies to extend their existing transparency tools to the country in question. Platforms will undoubtedly balance their business interests and the difficulty of implementing a measure against the cost of non-compliance with legal provisions in countries where they operate but do not have a legal presence. Recognizing that many countries in the world have limited ability to enforce legal obligations placed on the platforms, legal and regulatory provisions might instead serve to make a country a higher priority for companies as they globalize their ad transparency policies or promote voter information via their products.

6.1 Establishing which state entities have an enforcement mandate

Different institutions may have the right of oversight and enforcement over laws governing the intersection of social media and campaigning, and – given that provisions pertinent to this discussion might be scattered across a legal framework in several different laws – oversight may sit with multiple bodies or institutions. A few common types of enforcement bodies are noted below.

In many countries, responsibility for oversight and enforcement may sit with an independent oversight body or bodies. This might be an anti-corruption agency, a political finance oversight body, or a media oversight body, for example. As Germany expands its legal and regulatory framework around social media and elections, implementation and enforcement fall to an independent, non-governmental state media authority. This effort expands the mandate of the body, which has pre-existing expertise in media law, including advertising standards, media pluralism, and accessibility. Analysts of this move to expand German media authorities’ scope of work contend that “it is crucial to carefully consider what, if any, provisions could or should be translated to another European context… While Germany’s media regulators enjoy a high level of independence, the same cannot be said of other member states,” citing research finding that more than “half of EU member states lack safeguards for political independence in appointment procedures.”57

Responsibility for oversight will often be spread across multiple independent bodies or agencies, necessitating coordination and the development of joint approaches. A Digital Regulation Cooperation Forum has been created in the United Kingdom, for example, which promotes the development of coordinated regulatory efforts in the digital landscape among the UK Information Commissioner’s Office, the Competition and Markets Authority, and the Office of Communications. 

Other countries vest election authorities or election oversight bodies with implementation and enforcement capacity of some kind. For election authorities that have political finance, campaign finance, or media oversight mandates, the responsibility to oversee provisions related to social media in elections might, in some instances, be naturally added to these existing capacities. Election authorities may have a legal mandate to monitor for violations, or they may have adopted this responsibility independently while lacking the authority to enforce. In these instances, legal and regulatory frameworks will need to take into account relevant referral mechanisms to ensure detected violations can be shared with the appropriate body for further action.

In other instances, enforcement sits more directly with the judicial system. In the case of France, judges play a direct role in determining what content constitutes information manipulation. In addition to ordering the removal of manifestly false, widely disseminated, and damaging content, judges may also order “any proportional and necessary measure” to stop the “deliberate, artificial or automatic and massive” dissemination of misleading information online. In Argentina, the electoral court is responsible for enforcing violations resulting from advertising that takes place outside of the designated campaign period.58 Any model that relies on the judiciary to determine what constitutes a violation necessitates a fully independent judiciary with the capacity to understand the nuances of information manipulation and to review and respond to cases quickly.59

6.2 Building capacity to monitor for violations

Without establishing a capacity to monitor, audit, or otherwise provide effective oversight, laws and regulations governing the use of social media during elections are unenforceable. The subsection on Social Media Monitoring for Legal and Regulatory Compliance in the guidebook section on Election Management Body Approaches to Countering Disinformation outlines key questions and challenges in defining a monitoring approach. These include:

  • Does the body in question have a legal right to monitor social media?
  • What is the goal of the monitoring effort?
  • What is the time period for social media monitoring?
  • Will the monitoring be an internal operation or conducted in partnership with another entity?
  • Does the body in question have sufficient human and financial resources to carry out the desired monitoring effort?
  • What social media advertising transparency tools are available in the country?

6.3 Considerations for evidence and discovery

The nature of social media and digital content raises new questions in the consideration of evidence and the discovery process. For example, when platforms notify national authorities or make public announcements that they have detected malicious actions on their platforms, it is often accompanied by action to remove the accounts and content in question. When this material is removed from the platform, it is no longer available to authorities that might currently or in the future be capturing the content as evidence of violations of national law. 



In instances where a case is being brought against an actor for illegal conduct on social media, a legal request to preserve posts and data may be a step that authorities or plaintiffs need to consider. Dominion Voting Systems, for example, has pursued this action in a series of defamation cases against media outlets and others for falsely claiming that the company's voting machines were used to rig the 2020 U.S. elections. Dominion sent letters to Facebook, YouTube, Parler, and Twitter requesting that the companies preserve posts relevant to their ongoing legal action.

At present, there does not appear to be a comprehensive obligation on major platforms to preserve and provide information or evidence in the case of an investigation into the origins or financing of content and actions that may be violations of local laws. While in instances of violent crimes, human trafficking, and other criminal acts, major U.S.-based platforms have a fairly consistent record of complying with legal requests by governments for pertinent data, the same does not seem to be true in the case of political finance or campaign violations. A means and precedent for making legally-binding requests for user data from the platforms when a candidate or party is under credible suspicion of violating the law is an essential route to explore for enforcement. 

Granted, the platforms also play a critical role in ensuring user data gathered on their platforms is not handed over to government actors for illegitimate purposes. Determining what does and does not constitute a legitimate purpose necessitates careful deliberation and the establishment of sound principles. There is also likely to be frequent conflict between what platforms deem to be requests for data with the potential for abuse and what the national authorities requesting that data might think. Particularly in countries that have leaned heavily on their criminal code to sanction problematic speech, the platforms may have legitimate grounds for resisting requests for user data that carry a high potential for abuse.

6.4  Available sanctions and remedies

Countries have used a variety of sanctions and remedies to enforce their legal and regulatory mandates. Most of these sanctions have precedent in existing law as it pertains to analogous offline violations. 

The issuing of fines for political finance or campaign violations has a well-established precedent. In the context of violations of digital campaigning rules, fines are also a common sanction. Argentinian law, for example, stipulates that fines will be issued to human or legal entities that do not comply with content and publication limits on advertisements, including those transmitted via the internet. Argentina’s law assesses the fine in relation to the cost of advertising time, space, or internet bandwidth at the time of the violation.60 

Fines can also be directed at social media companies or digital service providers that do not meet their obligations. Paraguay, for example, holds social media companies vicariously liable and subject to fines for breach of campaign silence, illicit publication of opinion polls, or for engaging in biased pricing.61 It is unclear if Paraguay has successfully levied these fines against any social media companies.

Some legal and regulatory frameworks carry the threat of revoking public funding as a means of enforcement. In contrast to the penalty of a fine for individuals in breach of the law, the Argentinian Electoral Code stipulates that political parties that do not comply with limitations placed on political advertising will lose the right to receive contributions, subsidies, and public financing for a period of one to four years.62 The effectiveness of this sanction is heavily dependent on the extent to which parties rely on public funding for their income.

Provisions might seek to remedy harm by requiring entities found to be in violation of the law to issue corrections. As referenced in the section on promoting democratic information, South African regulation stipulates that the election commission can compel parties and candidates to correct electoral disinformation that is shared by parties, candidates, or their members and supporters. However, mandates to provide corrections can be manipulated to serve partisan interests; Singapore’s 2019 Protection from Online Falsehoods and Manipulation Act, which has been heavily criticized for its use to silence opposition voices, requires internet service providers, social media platforms, search engines, and video-sharing services like YouTube to issue corrections or remove content if the government deems it false and determines that a correction or removal is in the public interest. The law specifies that a person who has communicated a false statement of fact may be required to make a correction or remove it even if the person has no reason to believe the statement is false.63 Individuals who do not comply are subject to fines of up to $20,000 and imprisonment.64

Another sanction is the banning of a political party or candidate from competing in an election. The Central Election Commission of Bosnia and Herzegovina fined and banned a party from participating in 2020 elections for sharing a video that violated a provision against provoking or inciting violence or hatred,65 though this decision was overturned by the courts upon appeal. This sanction is at high risk of political manipulation and, if considered, must be accompanied by sufficient due process and a right of appeal.

In some instances, enforcement has resulted in the annulment of election results. The Constitutional Court of Moldova annulled a mayoral election in the city of Chisinau because both competitors had campaigned on social media during the campaign silence period. In the aftermath of this decision, which was viewed by many as disproportionate to the offense, Moldovan regulators introduced a new provision allowing campaign materials placed on the internet before Election Day to remain visible. Election annulment is an extreme remedy that is highly vulnerable to political manipulation and should be considered in the context of international best practice on validating or annulling an election.

Countries have banned or threatened to ban access to a social media platform within their jurisdiction as a means to compel compliance or force concessions from global social media platforms. The Government of India, for example, threatened to ban WhatsApp in 2018 following a string of lynchings resulting from viral rumors being spread via the messaging application. WhatsApp refused to accede to the government’s demands on key privacy provisions but did make alterations to the ways in which messages were labeled and forwarded within the app in response to government concerns. India also banned TikTok, WeChat, and a range of other Chinese apps in 2020. In 2018, the Indonesian government banned TikTok for several days on the basis that it was being used to share inappropriate content and blasphemy. In response, TikTok quickly acceded to the government’s demands and began censoring such content. The Trump administration threatened to ban TikTok in the United States over data privacy concerns unless the Chinese-owned company sold its U.S. operations. In 2017, Ukrainian President Petro Poroshenko signed a decree that blocked access to a number of Russian social media platforms on national security grounds. 

Banning access to entire platforms as a means to force concessions from companies is a blunt-force approach that is only likely to yield results for countries with massive markets of users. Far more frequently, bans on social media platforms have been used as a tool by authoritarian leaders to restrict access to information among their populations. 

The regulation of social media in campaigning, particularly in ways intended to deter or mitigate the impact of disinformation, has yet to coalesce around established and universally accepted good practices. As countries take legal and regulatory steps to address disinformation in the name of protecting democracy, the uncertainty and definitional vagueness of key concepts in this space has the potential to produce downstream implications for political and civil rights. Concerns about free speech, for example, are elevated when content is removed without any judicial review or appeals process. Critics point to the dangers of allowing unaccountable private social media companies and digital platforms to decide what content does or does not comply with the law. If sanctions are severe, for example, companies may be incentivized to overcorrect by removing permissible content and legitimate speech. The existence of robust appeals mechanisms is essential for preserving rights.