Gender Dimensions

Written by Victoria Scott, Senior Research Officer at the International Foundation for Electoral Systems Center for Applied Research and Learning

 

Around the world, women and people who challenge traditional gender roles by speaking out in male-dominated spaces (such as political leaders, celebrities, activists, election officials, journalists, or individuals otherwise in the public eye) are regularly subjected to biased media reporting, the spread of false or problematic content about them, and targeted character assaults, harassment, abuse, and threats. Although the public may be most familiar with this behavior when it is directed toward women leaders, any woman, girl, or person who does not conform to gender norms and who engages in public and digital spaces is at risk. Women who hold or seek positions of public leadership often face criticism that, unlike the scrutiny typically encountered by men in those same positions, has little to do with their ability or experience; instead, they face gendered commentary on their character, morality, appearance, and conformity (or lack thereof) to traditional gender roles and norms. Their representation in the public information space is often defined by sexist tropes, stereotypes, and sexualized content. While not a new challenge, this phenomenon is increasingly pervasive and has been fueled by technology. Although this type of online malice is most often directed at women and lesbian, gay, bisexual, transgender, and intersex (LGBTI) individuals in the public eye, any person who deviates from gender norms risks being exposed to this type of abuse.

For donors and implementers, understanding the intersection of gender and disinformation is imperative to designing and delivering comprehensive, effective programming to counter disinformation and hate speech and promote information integrity. Without considering the different ways in which women, girls, men, boys, and people with diverse sexual orientations and gender identities engage in the digital information environment and experience and interpret disinformation, donor and implementer efforts to counter disinformation will not reach the individuals who are among the most marginalized in their communities, and the impact and sustainability of these interventions will remain limited. Analyzing disinformation through a gender lens is essential to designing and implementing counter-disinformation programs in a way that both recognizes and challenges gender inequalities and power relations and transforms gender roles, norms, and stereotypes. This approach is necessary if donors, implementers, and researchers hope to effectively mitigate the threat of disinformation.

An increasing body of research and analysis explores the role of gender in disinformation campaigns, including the gendered impacts of disinformation on individuals, communities, and democracies. While this research presents a compelling case for funders and implementers to view information integrity and counter-disinformation programming through a gender lens, current programming is often limited to interventions to prevent or respond to online gender-based violence or to strengthen women’s and girls’ digital or media and information literacy. These are important approaches to strengthening the integrity of online spaces and responding to the information disorder, but a greater range of programming is both possible and necessary. 

Highlight


Distinguishing online gender-based violence and gendered disinformation:

Gendered disinformation and online gender-based violence are concepts that are often conflated. According to the framing used throughout this guidebook, online gender-based violence can be considered a type of gendered disinformation (using gender to target the subjects of attack in false or problematic content), but gendered disinformation is broader than what online gender-based violence encompasses. Gendered disinformation reaches beyond gendered attacks carried out online to include harmful messaging that exploits gender inequalities, promotes heteronormativity, and deepens social cleavages. One reason for the frequent conflation of these terms may be that discussions of gender and disinformation typically rely on examples of gendered disinformation that are also examples of online gender-based violence. For instance, a common example is fake sexualized content (like sexualized deepfakes and photoshopped images or edited videos placing a specific woman’s face onto sexualized content). This example can be considered both online gender-based violence and gendered disinformation. However, there are also examples of gendered disinformation messages that are not necessarily categorized as online gender-based violence, for example sensationalized and hyper-partisan junk news stories designed to deepen existing ideological divisions and erode social cohesion.1 These two phenomena intersect, and both threaten the integrity of the information environment and full and equal participation in political, civic, and public spheres. It is important for counter-disinformation programming to not only prevent and respond to these direct attacks of harassment and abuse considered under the label of online gender-based violence, but also to prevent and respond to influence operations that exploit gender inequalities and norms in their messaging. 

 

1There are differing definitions of the term "gendered disinformation," and a variety of perspectives on what constitutes gendered disinformation and whether or how it is distinct from online gender-based violence, abuse, or harassment. See, e.g., the review of existing definitions and distinctions in Jankowicz et al.’s Malign Creativity: How Gender, Sex, and Lies are Weaponized Against Women Online. As scholars and practitioners continue to develop their thinking in this emerging field, these definitions and perspectives continue to evolve.

 

Explore further:

This section of the guidebook is intended as a resource to assist donors, implementers, and researchers in applying a gender lens when investigating and addressing information integrity and disinformation. It will also assist funders and practitioners in integrating gender throughout all aspects of counter-disinformation programming.

The section begins by briefly outlining why counter-disinformation programming must be viewed through a gender lens.

The section then defines the term “gendered disinformation” and the gender dimensions of disinformation in each of its component parts (actor, message, mode of dissemination, interpreter, and risk).

The section closes with a look first at current approaches to countering disinformation with gender dimensions and then at some promising new approaches for gender-sensitive counter-disinformation programming. While gender-sensitive programming and good practices are still emerging in the information integrity field, this section of the guidebook offers promising approaches based on known good practices in related fields. Specific examples of integrating gender into counter-disinformation interventions are also included throughout the guidebook’s thematic topics.

The onus of responding to and preventing gendered disinformation should not fall on the shoulders of subjects of gendered digital attacks, nor on those targeted or manipulated as consumers of false or problematic content.

Donors and implementers might wonder what makes gendered disinformation unique and different from other types of disinformation, why it is important to analyze the digital information landscape and any form of disinformation (regardless of whether it is specifically gendered disinformation) from a gender perspective, or why it is necessary to design and implement counter-disinformation programming with gender-specific considerations.  Answers to these questions include:

  • Disinformation that uses traditional gender stereotypes, norms, and roles in its content plays to entrenched power structures and works to uphold heteronormative political systems that maintain the political domain as that of cisgender, heterosexual men.
  • The means of accessing and interacting with information on the internet and social media differs for women and girls compared with men and boys.
  • The experience of disinformation and its impact on women, girls, and people with diverse sexual orientations and gender identities differs from that of cisgender, heterosexual men and boys.
  • Disinformation campaigns may disproportionately affect women, girls, and people with diverse sexual orientations and gender identities, which is further compounded for people with multiple marginalized identities (such as race, religion, or disability).

In designing and funding counter-disinformation activities, donors and implementers should consider the variety of forms that gendered disinformation, and gendered impacts of disinformation more broadly, can take. Counter-disinformation efforts that holistically address gender as the subject of disinformation campaigns and address women and girls as consumers of disinformation provide for multidimensional interventions that are effective and sustainable.

1.1 What are the gender dimensions of disinformation?

The intersection of information integrity challenges and gender is complex and nuanced. It encompasses not only the ways gender is employed in deliberate disinformation campaigns, but also the ways in which gendered misinformation and hate speech circulate within an information environment and are often amplified by malign actors to exploit existing social cleavages for personal or political gain. This intersection of gender and information integrity challenges will be referred to as “gendered disinformation” throughout this section.

Gendered disinformation includes false, misleading, or harmful content that exploits gender inequalities or invokes gender stereotypes and norms, including to target specific individuals or groups; this description refers to the content of the message.  Beyond gendered content, however, other important dimensions of gendered disinformation include: who produces and spreads problematic content (actor); how and where problematic content is shared and amplified, and who has access to certain technologies and digital spaces (mode of dissemination); who is the audience that receives or consumes the problematic content (interpreter); and how the creation, spread, and consumption of problematic content affects women, girls, men, boys, and people with diverse sexual orientations and gender identities, as well as the gendered impacts of this content on communities and societies (risk)1.  

By breaking down the gender dimensions of information integrity challenges into their component parts – actor, message, mode of dissemination, interpreter, and risk – we can better identify different intervention points where gender-sensitive programming can make an impact.2

Below we illustrate the ways gender influences each of these five component parts of disinformation, hate speech, and viral misinformation.

Amplification of Viral Misinformation and Hate Speech

Graphic: The amplification of viral misinformation and hate speech through individual or coordinated disinformation, IFES (2019)

A. Actor

As with other forms of disinformation, producers and sharers of messages of disinformation with explicit gendered impacts may be motivated by ideology or a broader intent to undermine social cohesion, limit political participation, incite violence, or sow mistrust in information and democracy for political or financial gain. People who are susceptible to becoming perpetrators of gendered disinformation may be lone actors or coordinated actors, and they may be ideologues, members of extremist or fringe groups, or solely pursuing financial gain (such as individuals employed as trolls). Extrapolating from the field of gender-based violence, some of the risk factors that may contribute to a person’s susceptibility to creating and spreading hate speech and disinformation that exploits gender could include: 

  • At the individual level: attitude and beliefs; education; income; employment; and social isolation 
  • At the community level: limited economic opportunities; low levels of education; and high rates of poverty or unemployment
  • At the societal level: toxic masculinity or expectations of male dominance, aggression, and power; heteronormative societal values; impunity for violence against women; and patriarchal institutions

Gender-transformative interventions that seek to promote gender equity and healthy masculinities, strengthen social support and promote relationship-building, and increase education and skills development could build protective factors against individuals becoming perpetrators of gendered hate speech and disinformation. Similarly, interventions that seek to strengthen social and political cohesion, build economic and education opportunities in a community, and reform institutions, policies, and legal systems could contribute to these protective factors.  In addition to identifying interventions to prevent individuals from becoming perpetrators of disinformation, practitioners must also acknowledge the complex discussions around the merits of sanctioning actors for perpetrating disinformation and hate speech.  

It is worth noting that the present study did not identify any research or programming investigating women’s potential role as perpetrators of disinformation.  While it is widely known that the vast majority of perpetrators of online gender-based violence are men, researchers do not yet know enough about individuals who create and spread disinformation to understand whether, to what extent, or under what conditions women are prevalent actors.  When considering the motivations and risk factors of actors who perpetrate disinformation, it is important to first understand who those actors are.  This is an area that requires more research.

B. Message

Researchers and practitioners working at the intersection of gender and information integrity challenges have largely focused on the gender dimensions of disinformation messages. The creation, dissemination, and amplification of gendered content that is false, misleading, or harmful has been acknowledged and investigated more than other aspects of disinformation. The gendered content of disinformation campaigns typically includes messages that:

  • Directly attack women, people with diverse sexual orientations and gender identities, and men who do not conform to traditional norms of “masculinity” (as individuals or as groups)
  • Exploit gender roles and stereotypes, exacerbate gender norms and inequalities, promote heteronormativity, and generally increase social intolerance and deepen existing societal cleavages

There are myriad examples online of disinformation in the form of direct attacks on women, people with diverse sexual orientations and gender identities, and men who do not conform to traditional norms of “masculinity.” These can include sexist tropes, stereotypes, and sexualized content (e.g., sexualized deepfakes or non-consensual distribution of intimate images3). Some of these cases, such as those targeting prominent political candidates and leaders, activists, or celebrities, are well-known, having garnered public attention and media coverage.

Highlight


In 2016, in the lead-up to parliamentary elections in the Republic of Georgia, a disinformation campaign targeted women politicians and a woman journalist with videos allegedly showing them engaged in sexual activity. The videos, which were shared online, included intimidating messages and threats that the targets of the attack should resign or additional videos allegedly featuring them would be released.

In another Georgian example, the prominent journalist and activist Tamara Chergoleishvili was targeted with a fake video that allegedly showed her engaged in sexual activity with two other people. One of the people who appeared in the video with Chergoleishvili is a man who was labeled “gay” and suffered consequences resulting from homophobic sentiment in Georgia.

Examples such as these may seem sensationalized and extraordinary, but attacks like those described above are commonplace for women in the public eye. Cases of sexualized distortion have emerged against women in politics globally.

The potential impact of this type of gendered disinformation is to exclude and intimidate the targets, to discourage them from running for office, and to otherwise disempower and silence them.  Perpetrators can also use these attacks to encourage their targets to withdraw from politics or to participate in ways that are directed by fear; to shift popular support away from politically-active women, undermining a significant leadership demographic, manipulating political outcomes, and weakening democracy; and to influence how voters view particular parties, policies, or entire political orders. Such attacks can also be used for gender policing (checking women and men who may be violating the gendered norms and stereotypes that govern their society). 


Sources: Coda Story, BBC, Radio Free Europe/Radio Liberty    

But while some of these attacks targeting prominent figures may be well-known to the public, many more gendered attacks online are both highly public and distressingly commonplace. In 2015, a report from the United Nations Broadband Commission for Digital Development’s Working Group on Gender indicated that 73 percent of women had been exposed to or experienced some form of online violence, and that 18 percent of women in the European Union had experienced a serious form of internet violence at ages as young as 15. A 2017 Pew Research Center study conducted with a nationally representative sample of adults in the U.S. found that 21 percent of young women (aged 18 to 29) reported having been sexually harassed online. In its 2020 State of the World’s Girls report, Plan International presented findings from a survey of more than 14,000 girls and young women aged 15 to 25 across 22 countries. The survey found that 58 percent of girls reported experiencing some form of online harassment on social media, with 47 percent of those respondents reporting that they were threatened with physical or sexual violence. The harassment they faced was attributed to simply being a girl or young woman online (and was compounded by race, ethnicity, disability, or LGBTI identity), or to backlash against their work and the content they post as activists or outspoken individuals, “especially in relation to perceived feminist or gender equality issues.” These direct attacks are not typically talked about as unusual or surprising; rather, the risk of gendered attacks online is often treated as something women and girls should expect when choosing to engage in digital spaces, or, in the case of politically active women, as “the cost” of doing politics.


The contours of the digital information environment are characterized in part by this type of abuse, and these experiences have largely come to be expected by women and girls and tolerated by society. Much of this content goes unreported, and when survivors or targets of these attacks have brought complaints to law enforcement, technology companies and social media platforms, or other authorities, their concerns often go unresolved. They are commonly told that the content does not meet the standard for criminal prosecution or the threshold of abuse covered by a platform’s code of conduct, advised to censor themselves or go offline (or, in the case of minors, parents are advised to take away their daughters’ devices), or told that the threats are harmless.

Highlight


Because of the ways that identity can be weaponized online, and the intersectional nature of gendered abuse, women, girls, and people with diverse sexual orientations and gender identities who also have other marginalized identities (such as race, religion, or disability) experience this abuse at higher rates and in different ways.

Beyond developing and deploying direct gender-based attacks against individuals or groups, disinformation actors may exploit gender as fodder for additional content. Such content may exploit gender roles and stereotypes, exacerbate gender norms and inequalities, enforce heteronormativity, and generally increase social intolerance and deepen existing societal cleavages. Examples include content that glorifies hypermasculine behavior in political leaders, feminizes male political opponents, paints women as being ill-equipped to lead or hold public office on the basis of gender stereotypes and norms, engages in lesbian-baiting, conflates feminist and LGBTI rights and activism with attacks on “traditional” families, and displays polarizing instances (real or fabricated) of feminist and LGBTI activism or of anti-women and anti-LGBTI actions to stoke backlash or fear. This type of content can be more nuanced than direct attacks and therefore more resistant to programming interventions.

C. Mode of Dissemination

Although gendered hate speech, viral misinformation, and disinformation are not new or exclusively digital challenges, the tools of technology and social media have broadened the reach and impact of disinformation and emboldened the lone individuals and foreign or domestic actors who craft and disseminate these messages. Layering onto the harmful content that already circulates in the information environment, disinformation campaigns designed to build on existing social cleavages and biases can deploy a range of deceptive techniques that amplify gendered hate speech, making gender biases seem more widely held and prevalent than they are.

Gendered hate speech and misinformation can have immense reach and impact even in the absence of a coordinated disinformation campaign, as this content circulates in the digital information space through organic engagement.  While much of this content is generated and circulated in mainstream digital spaces, there is also a robust network of male-dominated virtual spaces, sometimes referred to collectively as the “manosphere,” where these harmful gendered messages can garner large bases of support before jumping to mainstream social media platforms.  The “manosphere” includes online blogs and message and image boards hosting a variety of anonymous misogynistic, racist, anti-Semitic, and extremist content creators and audiences (“men’s rights,” “involuntarily celibate,” and other misogynist communities intersect with the “alt-right” movement in these spaces)4.

Over time, the community of men who participate in these information spaces have developed effective strategies to keep these messages in circulation and to facilitate their spread from anonymous digital forums with little moderation to mainstream (social and traditional) media. Individuals who wish to disseminate these harmful messages have found ways to circumvent content moderation (such as using memes or other images, which are more difficult for content moderation mechanisms to detect) and have developed tactics to inject this content into the broader information environment and to deploy coordinated attacks against specific targets (individuals, organizations, or movements).

This is in part what makes gender an attractive tool for disinformation actors. The “manosphere” provides ready-made audiences who are ripe for manipulation and activation in the service of a broader influence operation, and these communities have a toolbox of effective tactics for disseminating and amplifying harmful content at the ready.  A known disinformation strategy includes the infiltration of existing affinity groups to gain group trust and seed group conversations with content intended to further a goal of the disinformation actor. Should disinformation actors manipulate these anti-women communities, they may successfully turn the energies of the “manosphere” against a political opponent, cultivating a troll farm with community members willing to carry out their work for free.

Highlight


In November 2020, Facebook announced its takedown of a network of profiles, pages, and groups engaged in coordinated inauthentic behavior. The disinformation campaign, which originated in Iran and Afghanistan, targeted Afghans with a focus on women as consumers of the content shared. Almost half of the profiles on Facebook and more than half of the accounts on Instagram in the network were presented as women’s accounts. A number of pages in the network were billed as being for women. The women-oriented content shared across the network included a focus on content promoting women’s rights, as well as highlighting the Taliban’s treatment of women. The Stanford Internet Observatory’s analysis of the network indicated that additional content associated with the network was critical of the Taliban and noted that “[i]t is possible the intent [of the women-focused content] was to undermine the peace negotiations between the Afghan government and the Taliban; the Taliban is known for restricting women’s rights.”

The potential impact of gendered disinformation like this is to deepen societal divides and exploit ideological differences, compromising social cohesion and undermining political processes.

 

Source: Stanford Internet Observatory

 

D. Interpreters

Disinformation that targets women and people with diverse sexual orientations and gender identities as interpreters, or consumers or recipients, of disinformation is a tactic that can exacerbate existing societal cleavages – likely in ways that politically or financially benefit creators and disseminators of these messages. This can include targeting women and people with diverse sexual orientations and gender identities with disinformation designed to exclude them from public or political life (e.g., in South Africa, spreading false information that people wearing fake nails or nail polish cannot vote in an election). In other cases, targeting these groups with disinformation may be part of a broader campaign to create polarizing debates and widen ideological gaps. For example, disinformation campaigns might inflame the views of feminists and supporters of women’s and LGBTI rights, as well as the views of those who are anti-feminist and who oppose women’s and LGBTI equality. Disinformation that targets women and people with diverse sexual orientations and gender identities as interpreters of disinformation may amplify or distort divergent views to undermine social cohesion.

E. Risk

The prevalence of technology and social media has brought new attention to the harms inflicted, especially on women, by information integrity challenges, including disinformation campaigns. Regardless of the individual motivations of the actors who create and disseminate gendered hate speech and disinformation, the gendered impacts of disinformation are typically the same:

  • Exclusion of women and people with diverse sexual orientations and gender identities from politics, leadership, and other prominent roles in the public sphere through their disempowerment, discrimination, and silencing; and
  • Reinforcement of harmful patriarchal and heteronormative institutional and cultural structures.

Highlight


Gender-sensitive programming "attempt[s] to redress existing gender inequalities," while gender-transformative programming "attempt[s] to re-define women and men's gender roles and relations."

While gender-sensitive programming aims to "address gender norms, roles and access to resources in so far as needed to reach project goals," gender-transformative programming aims to "[transform] unequal gender relations to promote shared power, control of resources, decision-making, and support for women's empowerment."

 

Source: UN Women, Glossary of Gender-related Terms and Concepts

Harmful gendered content and messaging that seeks to deter women from entering political spaces and to exploit social cleavages has become an expected, and in some cases accepted, part of the digital landscape. There are also implicitly gendered impacts of any disinformation campaign, as women may be the consumers or interpreters of any false and problematic content. Disinformation may also have a disproportionate effect on women and girls due to factors such as lower levels of educational attainment, media and information literacy, self-confidence, and social support networks, as well as fewer opportunities to participate in programming designed to build resilience against disinformation because of cultural norms and household and family care responsibilities. These are only a small sampling of the factors that likely cause women and girls to be disproportionately affected by disinformation, and they stem from broader gender inequalities such as unequal access to and control over resources, decision-making, leadership, and power. For this reason, effective counter-disinformation programming must address all aspects of the disinformation threat by designing and funding programming that is at minimum gender-sensitive, and ideally gender-transformative.

The gender dimensions of disinformation not only affect women and girls, but also people with diverse sexual orientations and gender identities, as well as people with other intersecting, marginalized identities. Due to limited relevant research and programming, there is minimal data available on this subject (a problem in and of itself), but members of the LGBTI population, as well as women and girls who have other marginalized identities, are targeted disproportionately by online harassment and abuse and likely also by disinformation campaigns. It is imperative to consider the differential impact of disinformation on women, girls, and people with diverse sexual orientations and gender identities depending on other aspects of their identity (such as race, religion, or disability). They may be targeted in different ways in the digital information space than individuals who do not share these marginalized identities and may suffer more significant consequences from disinformation campaigns.

The next two sections of the guide further explore two significant gendered impacts of disinformation:

  • Silencing women public figures and deterring women from seeking public roles
  • Undermining democracy and good governance, increasing political polarization, and expanding social cleavages

2.1 Silencing women public figures and deterring women from seeking public roles

As the internet and social media have increasingly become major sources of information and news consumption for people across the globe, women in politics are turning to these mediums to reach the public and share their own ideas and policies as an alternative to often biased media coverage. Many women—typically having limited access to funding, small networks, little name recognition, and less traditional political experience and ties than men in politics—note that their social media presence is integral to their careers and credit these platforms with giving them greater exposure to the public, as well as the ability to shape their narratives and engage directly with supporters and constituents. However, they also often find themselves the subjects of alarming amounts of gendered disinformation aimed at delegitimizing and discrediting them and discouraging their participation in politics.

According to research conducted by the Inter-Parliamentary Union with 55 women parliamentarians across 39 countries, 41.8 percent of research participants reported that they had seen “extremely humiliating or sexually charged images of [themselves] spread through social media.” Not only do such experiences discourage individual women politicians from continuing in politics or running for reelection (either for concerns over their safety and reputation or those of their families), but they also have a deleterious effect on the participation of women in politics across entire societies, as women are deterred from entering the political field by the treatment of women before them.

“Research has shown that social media attacks do indeed have a chilling effect, particularly on first-time female political candidates. Women frequently cite the ‘threat of widespread, rapid, public attacks on their personal dignity as a factor deterring them from entering politics.’”

--(Anti)Social Media: The Benefits and Pitfalls of Digital for Female Politicians, Atalanta

Although there has been a recent increase in research investigating women politicians’ experiences with gendered disinformation in the digital information space and social media5, this phenomenon is also experienced by women journalists, election officials, public figures, celebrities, activists, online gamers, and others. Women who are the subjects of disinformation, hate speech, and other forms of online attacks may be discriminated against, discredited, silenced, or pushed to engage in self-censorship.

Even more pernicious may be the effects of these disinformation campaigns on the women and girls who witness attacks on prominent women. Seeing how women public figures are attacked online, they are more likely to be discouraged and disempowered from entering the public sphere and from participating in political and civic life themselves. The subtext of these threats of harm, character assassinations, and other forms of discrediting and delegitimizing signals to women and girls that they do not belong in the public sphere, that politics, activism, and civic participation were not designed for them, and that they risk violence and harm upon entering these spaces.

2.2 Undermining democracy and good governance, increasing political polarization, and expanding social cleavages

“When women decide that the risk to themselves and their families is too great, their participation in politics suffers, as do the representative character of government and the democratic process as a whole.”

--Sexism, Harassment and Violence against Women Parliamentarians, IPU

“Women’s equal participation is a prerequisite for strong, participatory democracies and we now know that social media can be mobilized effectively to bring women closer to government – or push them out.”

--Lucina Di Meco, Gendered Disinformation, Fake News, and Women in Politics

Beyond their impacts on women, girls, and people with diverse sexual orientations and gender identities as individuals and communities, disinformation campaigns that use patriarchal gender stereotypes or norms, use women as targets in their content, or target women as consumers undermine democracy and good governance. As scholar and political scientist Lucina Di Meco notes, inclusion and equal, meaningful participation are prerequisites for strong democracies. When disinformation campaigns hamper that equal participation, elections and democracies suffer.

Disinformation campaigns can use gender dimensions to increase political polarization and expand social cleavages simply by reinforcing existing gender stereotypes, magnifying divisive debates, amplifying fringe social and political ideologies and theories, and upholding existing power dynamics by discouraging the participation of women and people with diverse sexual orientations and gender identities. These actions exclude members of marginalized communities from political processes and democratic institutions and, in so doing, chip away at their meaningful participation in their democracies and representation in their institutions. Because the voice and participation of citizens are essential to building sustainable democratic societies, silencing the voices of women, girls, and people with diverse sexual orientations and gender identities weakens democracies. Gendered disinformation is therefore not just a “women’s issue,” and tackling it is not just the mandate of “inclusion programming”; it is imperative to counter-disinformation programming and to efforts to strengthen democracy, human rights, and governance around the globe. A plurality of experiences and points of view must be reflected in the way societies are governed in order to ensure “participatory, representative, and inclusive political processes and government institutions.”

Current approaches to countering gendered disinformation and addressing gender dimensions of disinformation

The field of gender-sensitive counter-disinformation programming is still emerging, and programming that explicitly centers the problem of gendered disinformation and gendered impacts of disinformation is rare. Currently, from the democracy to gender to technology sectors, there is limited, albeit growing, awareness and understanding of the nuanced and varied ways that disinformation and gender programming can intersect.  To illustrate the variety of ways in which a gender lens can be brought to bear on counter-disinformation programming, programmatic examples that include gender elements are mainstreamed in the thematic sections of this guidebook. To complement these examples, this section applies what works in related programming areas to outline ways in which gender can be further integrated into counter-disinformation programming.  For example, promising practices for gender-sensitive counter-disinformation programming can be drawn from good practices in development or humanitarian aid programs focused on gender-based violence and gender equity. 

Focused on direct attacks of online gender-based violence

Existing programming to counter gendered disinformation is largely focused on preventing, identifying, and responding to direct attacks targeting women or people with diverse sexual orientations and gender identities as the subjects of gendered disinformation. These programs often focus narrowly on women politicians and journalists as the targets of these attacks. This type of programming includes a variety of responses, such as reporting and removal of content from platforms, fact-checking or myth-busting, digital safety and security training and skills-building, and media and information literacy for women, girls, and LGBTI communities. Similarly, the existing body of research on gendered disinformation is largely centered on diagnosing these direct attacks, the motivations of their perpetrators, and the harms such attacks cause. While these remain critical areas for research and programming funding, such interventions are necessary but not sufficient. Donors and implementers must also pursue programming that addresses other dimensions of gender and disinformation.

To better inform the design and delivery of effective and sustainable interventions to counter gendered disinformation, as well as to mitigate the gendered impacts of disinformation more broadly, researchers must also broaden their focus to investigate such topics as: 

  • The different ways in which women, girls, men, boys, and people with diverse sexual orientations and gender identities engage with the digital information ecosystem
  • The risk factors for and protective factors against perpetrating or being targeted by gendered disinformation
  • Women as perpetrators of—or otherwise complicit parties to—disinformation, hate speech, and other forms of harmful online campaigns

Informative programming in this space might include digital landscape mapping, gender and technology assessments to identify gaps in access and skills, focus group discussions, community engagement, and public opinion research. This type of programming will enable practitioners to better understand the diverse ways in which these different groups interact with the digital information space, may be vulnerable to being targeted by disinformation or susceptible to perpetrating disinformation, and are affected by the impacts of disinformation. 

 

More reactive than proactive, more ad hoc than systematic

As noted in other sections of the guidebook, one way to characterize counter-disinformation programming is to look at approaches as proactive or reactive.  

Proactive programming refers to interventions which seek to prevent the creation and spread of gendered disinformation before it enters the digital information space. It might also include efforts to strengthen the resilience of those likely to be targeted by disinformation or those susceptible to becoming perpetrators of gendered disinformation.  This can include a broad array of interventions, such as media and information literacy, confidence- and resilience-building, gender equality programming, civic and political participation programming, and education, workforce development, and livelihoods programming. 

Reactive programming might include interventions which seek to respond to gendered disinformation after it has already been dispatched, such as reporting content to platforms or law enforcement for removal or investigation or fact-checking and responsive messaging to counter false or problematic content.  

Some gender-sensitive counter-disinformation programming may be both reactive and proactive, comprising interventions that both respond to discrete cases of gendered disinformation and aim to deter would-be perpetrators. Examples include platform- or industry-wide policies and approaches to the identification, tagging, or removal of content; legislation to criminalize hate speech, online gender-based violence, and other harmful or problematic content; and regulation of platform responses to gendered disinformation.

Reactive approaches tend to be more ad hoc and immediate or short-term by nature, attempting to stamp out discrete disinformation campaigns or attacks as they emerge.  Some proactive approaches are also ad hoc in nature, such as programs with one-off training sessions, classes, mobile games, or other toolkits for digital safety and security or media and information literacy.  However, many proactive approaches (and some responses which are both reactive and proactive) are more systematic or long-term, aiming to transform gender norms, increase democratic participation, create long term social and behavior change, create safer spaces for women, girls, and people with diverse sexual orientations and gender identities online, and build the resilience of individuals, communities, and societies to withstand the weight of disinformation attacks and campaigns.

Much of the existing programming to counter gendered disinformation is reactive and ad hoc, designed to respond to gendered disinformation and address its impacts after it has already been pushed into the digital environment.  Reactive interventions, such as content tagging or removal and fact-checking, myth-busting, or otherwise correcting the record in response to direct attacks, are generally insufficient to reverse the harms caused by gendered disinformation, from reputational damage and self-censorship to withdrawal from public and digital spaces and sowing seeds of distrust and discord. 

Design Tip


However, as scholars and practitioners in this field will note, much of the damage has already been done by the time responses to gendered disinformation are deployed6.

As is the case with most gender-related programming, while there are important uses for both reactive and proactive programming to counter gendered disinformation, ensuring that disinformation prevention and response programming is both effective and sustainable requires the donor and implementer communities to invest in proactive, not just reactive, gender-sensitive counter-disinformation programming. A major challenge, however, is that the results of gender-transformative programming and of programming designed to strengthen protective factors against disinformation are typically measured in generational shifts, rather than in the two- to five-year periods typical of donor funding streams. Accommodating this holistic approach would require donors to rethink the typical structure of their funding mechanisms and reporting requirements.

Promising approaches to gender-sensitive counter-disinformation programming

Establish institutional and organizational protocols

Several recent research studies7 investigating the prevalence and impact of online harassment and abuse of (women) journalists in the United States and around the world have found that many subjects of such attacks do not report these incidents to their employers or other authorities out of concern that nothing can or would be done in response, or for fear of personal or professional repercussions from reporting.  In cases where they do report these incidents to their employers, the organizations may not take action or may handle reports inconsistently and inadequately.  A key recommendation that surfaced from these findings is to establish institutional and organizational protocols, including specific policies and practices to support those attacked and to address reports of attacks.

Based on this research and work in the area of online gender-based violence, donors and implementers should support institutions and organizations such as political parties or campaigns, election management bodies (EMBs), news and media outlets, and activist or advocacy organizations to establish comprehensive institutional protocols to prevent attacks and respond to reports, including:

  • Providing appropriate digital safety and security training and education about online harassment
  • Establishing clear and accessible reporting mechanisms that ensure the safety and protection of survivors of online violence and gendered disinformation, as well as their ability to freely participate in digital spaces
  • Ensuring systematic and consistent investigation of reports of attacks and referrals to appropriate authorities
  • Establishing a variety of responses that institutions will offer to support their staff or members who are subjects of attacks (e.g. screening and documenting threats, reporting to platforms and/or authorities, coordinating counter-messaging, and sharing guidance and providing support to staff or members who choose to block or confront the perpetrators of their attacks)
  • Providing appropriate resources and referrals following a report, such as physical security, psychological support, legal support, and personal information scrubbing services

In order to determine what protocols are needed, and to be responsive to the lived experiences of women and people with diverse sexual orientations and gender identities at work, programming should allow time and funding for institutions to survey their staff about their experiences and involve staff in decisions about the protocols, policies, and practices.

This approach can be adapted from the journalism and media industry to other organizations and institutions where gendered disinformation attacks are prevalent, instituting policies and practices that ensure supportive, consistent, and effective responses to direct attacks. This intervention can help combat the impunity of perpetrators of gendered disinformation attacks, as well as the silencing, self-censorship, and discouragement from participating in the political or public spheres experienced by the subjects of these attacks.

 

Coordinate prevention, response, and risk mitigation strategies and establish appropriate case management and referral pathways

Gendered disinformation, much like gender-based violence, is a challenge which requires the involvement of stakeholders across multiple sectors and at multiple levels.  Prevention and response efforts to address gendered disinformation depend on cooperation between the public and private sectors, including technology firms and media outlets (especially social media and digital communications platforms), law enforcement and justice authorities, civil society, psychosocial and mental health providers, and other health providers in cases where technology-fueled disinformation efforts may result in physical harm.  Further, gendered disinformation risk mitigation efforts also depend on cooperation and information sharing between these stakeholders and international- and national-level policymakers (to inform legal and regulatory reform), civil society actors (to advocate for appropriate, effective, and sustainable interventions), the education sector (to inform curricula related to critical thinking and analytical skills, media and information literacy, digital safety), and the security sector in cases where incidents of gendered disinformation may be part of a coordinated campaign by malign foreign or domestic actors.

Donors and implementers should look to the robust experience of the humanitarian aid sector, specifically that of gender-based violence (GBV) prevention and response coordinators and service providers, to develop a coordinated approach to gender-sensitive disinformation interventions.  Specifically, funders and implementers can adapt and draw guidance from the Handbook for Coordinating Gender-based Violence Interventions in Emergencies and model national-level coordination networks and protocols on relevant elements of the approach detailed in this handbook to implement gender-sensitive responses to disinformation. 

Two important elements of a coordinated approach to GBV interventions in emergencies to carry over when adapting this approach are case management and the establishment and use of appropriate referral pathways.  Establishing appropriate case management in this scenario might entail: 1) the stakeholder who receives a complaint of gendered disinformation (for instance, a social media platform or local police) conducts a standard intake process with the person reporting; and 2) the stakeholder who receives the complaint or report uses an established referral pathway to refer the reporting party to a local civil society organization (for instance, local women’s organizations that are experienced GBV service providers) for case management and additional referrals as appropriate. Referring the reporting party to an established case manager that is trained to work with targets or survivors of gendered disinformation and networked with the other stakeholders can streamline supportive services for the reporting party by establishing one primary point of contact responsible for interfacing with them. The case manager organization would be responsible for communicating the various response and recourse options available, providing referrals to appropriate service providers in the referral network and referring cases to appropriate members of the coordination network for follow-up, and (in cases of a direct attack) providing support to the target or survivor of the attack.  

Establishing referral pathways in this scenario would involve: identifying or establishing appropriate organizations or institutions responsible for different aspects of responding to reports of gendered disinformation; ensuring all coordination network organizations and institutions have access to the referral pathways, so that they can receive initial reports of incidents and refer reporting parties to a local case manager organization; and having case managers inform the reporting party about available services and avenues to pursue different interventions or recourse. If the reporting party gives permission, the case manager should also connect them with relevant services in the referral pathway.
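To illustrate how these pieces fit together, here is a minimal sketch, in Python, of the intake-and-referral flow described above. Every name in it (the report fields, the case manager organization, the service list) is a hypothetical placeholder for illustration, not a prescribed implementation or an element of the GBV handbook.

```python
from dataclasses import dataclass

@dataclass
class Report:
    """A gendered disinformation complaint, as received by any network member."""
    reporting_party: str
    received_by: str      # e.g. "social_media_platform" or "local_police" (hypothetical labels)
    description: str

# Hypothetical referral pathway: every intake point routes to one local
# case manager organization, which becomes the single point of contact.
REFERRAL_PATHWAY = {
    "case_manager": "local_womens_organization",  # an experienced GBV service provider
    "services": ["psychosocial_support", "legal_aid", "digital_security",
                 "platform_escalation", "law_enforcement_referral"],
}

def intake(report: Report) -> dict:
    """Step 1: standard intake by whichever stakeholder receives the complaint.
    Step 2: referral to the case manager, who presents the available options and,
    only with the reporting party's permission, connects them to services."""
    return {
        "intake_source": report.received_by,
        "referred_to": REFERRAL_PATHWAY["case_manager"],
        "options_presented": list(REFERRAL_PATHWAY["services"]),
        "consent_required_before_onward_referral": True,
    }
```

The design choice carried over from GBV coordination practice is that intake can happen anywhere in the network, but case management is centralized with one trained organization, so the reporting party has a single point of contact and does not have to retell their story to each provider.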

Donors should consider supporting: 

  • A mapping or sectoral analysis of relevant stakeholders
  • A convening of practitioners and experts to discuss the gendered disinformation landscape and needs
  • Training and sensitization for law enforcement authorities, legal practitioners, and policymakers on gender, online and technology-facilitated gender-based violence, and disinformation
  • The establishment of a coordination network that includes social media and digital communications platforms, law enforcement and justice authorities, civil society, psychosocial and mental health providers, and other health providers
  • The development of clear roles and responsibilities for network members, for example establishing case manager organizations with support from civil society and governments
  • The development of response protocols to guide the coordination, management, prevention, and response efforts of the network, including a case management methodology and referral pathway

This intervention can contribute to the delivery of a holistic, survivor-centered approach to gender-sensitive counter-disinformation prevention and response programming, as well as combat impunity for perpetrators by institutionalizing a consistent and systematic approach of reporting claims to platforms and law enforcement authorities for investigation and recourse.

 

Build networks and communities of supporters and deploy counterspeech

“Don’t feed the trolls” is a common refrain of warning offered to those who find themselves the subjects of gendered disinformation.  Experts used to think the best way to counter direct attacks targeting someone due to their gender and exploiting gendered norms and stereotypes was to simply ignore the attacks.  Yet, recently, the dialogue around this issue has begun to evolve.  

While some still advise not to “feed the trolls” (that is, to simply ignore, or to block, report, and then ignore, the harmful content hurled at and about them online), others who work with the subjects of these attacks, as well as those who have themselves been targeted, have begun to acknowledge the shortcomings of this approach. They point to the empowerment that subjects of gendered disinformation, and those who witness it, may derive from speaking up and calling out the attacks (or seeing others do so), and to the need to call out misogyny when it rears its head in digital spaces. Research conducted as part of the Name It. Change It. project also indicates that women politicians who directly respond to sexist attacks and call out the misogyny, harassment, or abuse they face online (or when a third party does so on their behalf) are able to regain credibility with voters they may initially have lost as a result of having been attacked.

It is important to clearly state that, while there are ongoing and evolving discussions on this topic about how best individuals can or ‘should’ respond to gendered disinformation, it is not the responsibility of those who find themselves the subjects of such attacks to respond in any one way, if at all, nor to prevent the occurrence or take steps to mitigate the risks of these attacks.  Those suffering gendered disinformation attacks should not be expected to shoulder the burden of solving this problem.  Rather, it is the responsibility of a variety of stakeholders—including the technology platforms, government institutions and regulatory bodies, political parties, media organizations, and civil society—to establish and implement effective approaches and mechanisms to prevent and respond to gendered disinformation, as well as to work to address its root causes and to mitigate its long-lasting and far-reaching impacts.  Nevertheless, best practice adapted from gender-based violence response programming indicates that when the subject of gendered disinformation reports an incident, they should be presented with information on the available options for response and recourse and any potential benefits and further risks associated with those options.

One such possible response to gendered disinformation is counterspeech, which the Dangerous Speech Project defines as “any direct response to hateful or harmful speech which seeks to undermine it,” also noting, “There are two types of counterspeech: organized counter-messaging campaigns and spontaneous, organic responses.” Individuals who have been targeted by harmful content online might choose to engage in counterspeech themselves, or they might choose to enlist the support of their own personal and professional community or an online network of supporters to craft and deploy counterspeech publicly on their behalf or privately with messages of support (for example via email or on a closed platform).  The effectiveness of counterspeech is difficult to measure, in part because those who engage in counterspeech may have different goals (ranging from changing the attitude of the perpetrator to limiting the reach of the harmful content to providing the subject of an attack with supportive messages). However, emerging research and anecdotal evidence indicates that crafting and deploying counterspeech (whether by the subjects of these attacks, their institutions or organizations, or a broader online community of supporters) is a promising practice in responding to gendered disinformation.8

A variety of positive outcomes of counterspeech have been cited, including:

  • delivering a sense of empowerment back to the targets of gendered disinformation attacks, allowing them to take back their narrative
  • increasing the likelihood of positive, civil, or “pro-social” comments and/or decreasing the likelihood of negative, uncivil, or “anti-social” comments
  • drowning out harmful content with supportive counterspeech, both on public social media posts and in private communications
  • demonstrating to those sharing harmful content that their language or message is not accepted


Social media monitoring can play an important role in countering gendered disinformation, and it can be linked to the coordination and deployment of counterspeech activities in response to specific attacks.

Researchers, practitioners, and civil society actors are increasingly engaging in social media monitoring activities to inform their understanding of gendered disinformation, to identify entry points to disrupt gendered disinformation, viral misinformation, and hate speech, and to advocate for laws or regulations that are responsive to the growing challenges of online gender-based violence and the spread of harmful gendered content online. 

Social media monitoring in the context of gendered disinformation can serve two primary functions:

  • To listen to speech taking place across the digital information environment, monitor sentiment, and provide an important window into the creation, dissemination, and amplification of harmful content 
  • To monitor the adherence of political actors, media, and public institutions to legal and regulatory guidance and codes of conduct around disinformation and hate speech, and to monitor technology platforms’ enforcement of their community standards, terms of use, or codes of conduct
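To make the first function concrete, the short Python sketch below scores the sentiment of posts that mention monitored public figures, using the open-source VADER model shipped with NLTK. This is a minimal sketch, not a production monitoring system: VADER is English-only, the handles and posts are invented for illustration, and real monitoring would require authorized access to platform data and locally appropriate language models.

```python
# Minimal sketch of the "listening" function: score the sentiment of posts
# that mention monitored public figures. VADER is English-only; the handles
# and posts below are hypothetical placeholders.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time model download

# Hypothetical monitored figures and collected posts
monitored = {"@candidate_a", "@journalist_b"}
posts = [
    "@candidate_a gave a strong, detailed answer at the debate",
    "@candidate_a doesn't belong on that stage and everyone knows it",
    "@journalist_b is a liar spreading fake stories",
]

analyzer = SentimentIntensityAnalyzer()
for post in posts:
    if any(handle in post for handle in monitored):
        # Compound score ranges from -1 (most negative) to +1 (most positive)
        score = analyzer.polarity_scores(post)["compound"]
        label = "negative" if score <= -0.05 else "positive" if score >= 0.05 else "neutral"
        print(f"{label:>8} ({score:+.2f}): {post}")
```

In practice, monitoring teams pair this kind of automated scoring with human review, since off-the-shelf sentiment models routinely miss coded, ironic, or local-language abuse.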

An early step for donors, researchers, and implementers is to create methodologies and tools to monitor social media and collect data on gendered disinformation, hate speech, and viral misinformation. These should be adapted to local contexts and applied in research and programming in order to mount an effective effort to counter gendered disinformation. In 2019, CEPPS released a social media analysis tool to monitor online violence against women in elections. The tool includes step-by-step guidance on how to identify trends and patterns of online violence: identifying the potential targets to monitor (e.g., women politicians, candidates, and activists); defining the hate speech lexicon to monitor; choosing which social media platforms to monitor; selecting the research questions; running the analysis using data mining software; and analyzing the results. A full description of the step-by-step process can be found in CEPPS’ Violence Against Women in Elections Online: A Social Media Analysis Tool.
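The CEPPS tool describes a methodology rather than software. As a rough, hypothetical illustration of the workflow it outlines, the Python sketch below flags posts that both mention a monitored target and contain a term from a hate speech lexicon, then tallies flagged posts per target. The targets, lexicon terms, and posts are placeholders; a real application would draw on platform data exports and a locally developed lexicon.

```python
# Rough illustration of a lexicon-based monitoring workflow (not the CEPPS
# software itself): flag posts that mention a monitored target AND contain
# a lexicon term, then tally flagged posts per target. All data here is
# invented; real lexicons come from local-language workshops.
from collections import Counter
import re

targets = {"@candidate_a": "Candidate A", "@minister_b": "Minister B"}
lexicon = {"shrill", "hysterical", "go back to the kitchen"}  # hypothetical terms

posts = [
    "@candidate_a was hysterical at the press conference",
    "@candidate_a outlined a credible security plan",
    "@minister_b should go back to the kitchen",
]

def matches(text, terms):
    """True if any lexicon term appears as a whole word or phrase in the text."""
    lowered = text.lower()
    return any(re.search(rf"\b{re.escape(term)}\b", lowered) for term in terms)

flags = Counter()
for post in posts:
    for handle, name in targets.items():
        if handle in post and matches(post, lexicon):
            flags[name] += 1
            print(f"FLAG [{name}]: {post}")

print(dict(flags))  # e.g. {'Candidate A': 1, 'Minister B': 1}
```

The CEPPS steps of choosing platforms and research questions map onto the same structure: each platform export becomes another list of posts, and each research question another aggregation over the flagged results.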

NDI, in partnership with Demos, has also developed a methodology for scraping and analyzing such data, documented in its reports “Tweets that Chill” and “Engendering Hate,” which draw on research in five countries. An essential step in the methodology is creating a lexicon, in local languages, of gender-based harassing language and the political language of the moment through workshops with local women’s rights organizations and civic technology organizations.

Some of the key lessons from this research include:

  • Contextually- and linguistically-specific lexicons of online violence must be created and then evolve: “Across all case study countries, workshop participants highlighted the fluid and evolving nature of language and brainstormed ways to account for this nuance in the study methodology. For example, NDI learned from the Colombia workshop that violent language in Spanish varied across Latin America, with both Colombia-specific and words from other parts of the region being used within the country. In Indonesia, religious words or phrases were used, complicating and heightening the online violence by invoking religious messages at the same time. In Kenya, workshop participants noted that a number of violent words/phrases that were in common usage in spoken Swahili, had not yet made it into written text online on Twitter. These varied lessons point to the need for contextually- and linguistically-specific lexicons that can be continuously refreshed, modified, and implemented with human coders working alongside computer algorithms.” (excerpted from “Tweets that Chill”; see the sketch following this list)


  • Attention to minority communities and intersecting identities is essential: “Online [violence against women in politics] is varied and contextual, as it differs from country to country and culture to culture. However, it is also the case that the expressions used and impacts of online violence can vary significantly between and among communities within the same country. For this reason, it is important to intentionally include and consider historically marginalized communities among women (e.g. women with disabilities, LGBTI women, and female members of religious and ethnic minorities) when exploring the phenomenon of online [violence against women in politics]. During the Colombia workshop, female representatives from the deaf community shared that the violence they faced was not in text, but through the uploading of violent GIFs and/or video clips in sign-language. It was explained that this delivery mechanism was particularly effective in conveying threat and insecurity because, for the majority of the members of the deaf community in Colombia, sign language is their first language, and the targeting was therefore unmistakable. Understanding that the kinds of threats and modes of online violence can differ substantially when targeting different marginalized communities indicates that further work is required to create relevant lexicons.” (excerpted from “Tweets that Chill”)


  • Center Local Expertise: “How gendered disinformation is framed and spreads across a network varies greatly according to context. Identifying or mitigating gendered disinformation cannot be successful without the central involvement and direction of local experts who understand the subtleties of how gendered disinformation may be expressed and where it is likely to arise and when. Platforms should support the work of local experts in identifying and combating gendered disinformation, for instance through the provision of data access or the trialing of potential responses through changes to platform design. Automated systems for identifying gendered disinformation are unlikely to have high levels of accuracy - though if employed, should be employed transparently and overseen by local experts.” (excerpted from “Engendering Hate”)
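As a minimal illustration of the human-in-the-loop lexicon refresh described in the first lesson above, the Python sketch below surfaces frequent words from posts that moderators flagged as abusive but that the current lexicon missed, and queues them for human coders to review. The lexicon, stopword list, and posts are all hypothetical.

```python
# Sketch of one human-in-the-loop lexicon refresh step: surface frequent
# words from abusive-looking posts that the current lexicon missed, so
# human coders can decide whether to add them. All data is invented.
from collections import Counter
import re

lexicon = {"hysterical"}  # current (hypothetical) lexicon
stopwords = {"the", "a", "is", "at", "and", "to", "her", "she"}

# Posts a human moderator or reporting pipeline marked as abusive,
# but which contain no current lexicon term
missed_posts = [
    "she is a witch and a disgrace",
    "what a witch, pure disgrace at the podium",
]

candidates = Counter()
for post in missed_posts:
    for word in re.findall(r"[a-z']+", post.lower()):
        if word not in lexicon and word not in stopwords:
            candidates[word] += 1

# Human coders review the most frequent unseen words before any are added
for word, count in candidates.most_common(5):
    print(f"review candidate: {word!r} (seen {count}x)")
```

Keeping the final add-or-reject decision with human coders reflects the caution quoted above that automated systems for identifying gendered disinformation are unlikely to be highly accurate on their own.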


Section 6.2 of the Legal and Regulatory chapter, on building capacity to monitor violations, and the Election Monitoring chapter explore these concepts further.

Seemingly in response to what many perceive as a lack of adequate intervention by policymakers and technology platforms to address gendered disinformation, a variety of NGOs, civil society groups, and advocacy organizations have designed interventions that train the likely targets of these digital attacks (as well as their employers, allies, and bystanders) to develop and implement effective counterspeech campaigns. Others have established online communities of supporters who stand ready to assist the targets of these attacks with counterspeech efforts, among other supportive services, such as monitoring the digital space where an attack is taking place and helping the target report the incident.


Counterspeech training examples:

  • Tactical Tech’s Gendersec Training Curricula on “Hacking Hate Speech” – a training workshop curriculum on how to set up an online support network, create textual and visual counterspeech content, and deploy a counterspeech campaign
  • PEN America’s Online Harassment Field Manual – a training guide for journalists and writers on how to respond to online harassment and abuse, including building a community of supporters and developing counterspeech messages; includes guidance for employers on how to support staff experiencing online harassment, including through counterspeech

Online communities of supporters and counterspeech programming examples:

  • Hollaback!’s HeartMob project – an online platform with an at-the-ready network of supporters who respond to users’ reports of online harassment and provide positive counterspeech (among other supportive services)
  • TrollBusters – an at-the-ready network of supporters that responds to women journalists’ reports of online harassment by providing positive counterspeech; includes monitoring targets’ social media accounts for continued attacks and sending continued counter-messaging (among other supportive services)

Funders and implementers should consider providing support to scale up interventions like those referenced above for building communities of supporters and crafting and deploying effective counterspeech campaigns, including supporting the integration of these civil society interventions into technology platforms.

Strengthen protective factors and build resilience of individuals and communities

Because gendered disinformation is born of gender inequality and discriminatory norms, deterring its creation, dissemination, and amplification in the digital information environment will require donors and implementers to think beyond the perceived scope of counter-disinformation programming. As noted previously, programming to strengthen protective factors and build the resilience of individuals, communities, and societies against gendered disinformation may not look like what donors and implementers typically think of as counter-disinformation interventions. This programming should not be limited to building the resilience of individual women, girls, and people with diverse sexual orientations and gender identities (although this is one important type of response); it should also include gender-transformative interventions that aim to strengthen the resilience and protection of whole communities and societies against both the perpetration and the consumption of gendered disinformation.

Programming to strengthen individuals’, communities’, and societies’ protective factors against the threat of gendered disinformation (and disinformation more broadly) includes interventions spanning development sectors, such as programming to:

  • promote gender equity and gender justice
  • transform discriminatory and patriarchal gender norms
  • strengthen social cohesion
  • increase democratic participation and inclusion
  • improve equitable access to quality education
  • increase economic stability and improve economic opportunities
  • build media and information literacy 
  • strengthen critical thinking, analytical, and research skills 
  • provide social support and confidence-building opportunities 

Some who work at the intersection of technology, disinformation, and gender caution that a focus on interventions such as media and information literacy, critical thinking skills, and confidence-building inappropriately places the responsibility for withstanding disinformation and its effects on the individuals being harmed by it, rather than on the technology sector and policymakers to identify and institute effective solutions. The onus of responding to and preventing gendered disinformation should not fall on the shoulders of the subjects of gendered digital attacks, nor on those targeted or manipulated as consumers of false or problematic content. However, to stamp out the problem of disinformation, gender-sensitive counter-disinformation efforts must think holistically about building resilience, designing programming that strengthens not only individuals but also communities and whole societies. Regionally or nationally implemented media and information literacy curricula, for example, do not place the responsibility on individual students to learn to withstand gendered disinformation, but rather work toward inoculating entire communities against information integrity challenges.

Donors and implementers should work to integrate gender-sensitive counter-disinformation programming across development sectors, building these interventions into programming focused on longer-term social and behavior change to build the resilience of individuals, communities, and societies to withstand the evolving problem of disinformation.