Understanding the Gender Dimensions of Disinformation

1. Gender Considerations in Counter-Disinformation Programming

Updated on: April 1, 2021

The onus of responding to and preventing gendered disinformation should not fall on the shoulders of subjects of gendered digital attacks, nor on those targeted or manipulated as consumers of false or problematic content.

Donors and implementers might wonder what makes gendered disinformation distinct from other types of disinformation, why it is important to analyze the digital information landscape and any form of disinformation (whether or not it is specifically gendered) from a gender perspective, or why it is necessary to design and implement counter-disinformation programming with gender-specific considerations. Answers to these questions include:

  • Disinformation that uses traditional gender stereotypes, norms, and roles in its content plays to entrenched power structures and works to uphold heteronormative political systems that maintain the political domain as that of cisgender, heterosexual men.
  • The means of accessing and interacting with information on the internet and social media differs for women and girls compared with men and boys.
  • The experience of disinformation and its impact on women, girls, and people with diverse sexual orientations and gender identities differs from that of cisgender, heterosexual men and boys.
  • Disinformation campaigns may disproportionately affect women, girls, and people with diverse sexual orientations and gender identities, which is further compounded for people with multiple marginalized identities (such as race, religion, or disability).

In designing and funding counter-disinformation activities, donors and implementers should consider the variety of forms that gendered disinformation, and gendered impacts of disinformation more broadly, can take. Counter-disinformation efforts that holistically address gender as the subject of disinformation campaigns and address women and girls as consumers of disinformation provide for multidimensional interventions that are effective and sustainable.

1.1 What are the gender dimensions of disinformation?

The intersection of information integrity challenges and gender is complex and nuanced. It includes not only the ways gender is employed in deliberate disinformation campaigns, but also encompasses the ways in which gendered misinformation and hate speech circulate within an information environment and are often amplified by malign actors to exploit existing social cleavages for personal or political gain. This intersection of gender and information integrity challenges will be referred to as “gendered disinformation” throughout this section.

Gendered disinformation includes false, misleading, or harmful content that exploits gender inequalities or invokes gender stereotypes and norms, including to target specific individuals or groups; this description refers to the content of the message.  Beyond gendered content, however, other important dimensions of gendered disinformation include: who produces and spreads problematic content (actor); how and where problematic content is shared and amplified, and who has access to certain technologies and digital spaces (mode of dissemination); who is the audience that receives or consumes the problematic content (interpreter); and how the creation, spread, and consumption of problematic content affects women, girls, men, boys, and people with diverse sexual orientations and gender identities, as well as  the gendered impacts of this content on communities and societies (risk)1.  

By breaking down the gender dimensions of information integrity challenges into their component parts – actor, message, mode of dissemination, interpreters, and risk – we can better identify different intervention points where gender-sensitive programming can make an impact2. 

Below we illustrate the ways gender influences each of these five component parts of disinformation, hate speech, and viral misinformation.

Amplification of Viral Misinformation and Hate Speech

Graphic: The amplification of viral misinformation and hate speech through individual or coordinated disinformation, IFES (2019)

1.2 Actor

As with other forms of disinformation, producers and sharers of disinformation with explicitly gendered impacts may be motivated by ideology, by political or financial gain, or by a broader intent to undermine social cohesion, limit political participation, incite violence, or sow mistrust in information and democracy. People who are susceptible to becoming perpetrators of gendered disinformation may be lone or coordinated actors, and they may be ideologues, members of extremist or fringe groups, or individuals pursuing financial gain alone (such as people employed as trolls). Extrapolating from the field of gender-based violence, some of the risk factors that may contribute to a person’s susceptibility to creating and spreading hate speech and disinformation that exploits gender could include:

  • At the individual level: attitude and beliefs; education; income; employment; and social isolation 
  • At the community level: limited economic opportunities; low levels of education; and high rates of poverty or unemployment
  • At the societal level: toxic masculinity or expectations of male dominance, aggression, and power; heteronormative societal values; impunity for violence against women; and patriarchal institutions

Gender-transformative interventions that seek to promote gender equity and healthy masculinities, strengthen social support and promote relationship-building, and increase education and skills development could build protective factors against individuals becoming perpetrators of gendered hate speech and disinformation.  Similarly, interventions that seek to strengthen social and political cohesion, build economic and education opportunities in a community, and reform institutions, policies, and legal systems could contribute to these protective factors.  In addition to identifying interventions to prevent individuals from becoming perpetrators of disinformation, practitioners must also acknowledge the complex discussions around the merits of sanctioning actors for perpetrating disinformation and hate speech.  

It is worth noting that the present study did not identify any research or programming investigating women’s potential role as perpetrators of disinformation. While it is widely known that the vast majority of perpetrators of online gender-based violence are men, researchers do not yet know enough about the individuals who create and spread disinformation to understand whether, to what extent, or under what conditions women are prevalent actors. When considering the motivations and risk factors of actors who perpetrate disinformation, it is important first to understand who those actors are; this is an area that requires more research.

1.3 Message

Researchers and practitioners working at the intersection of gender and information integrity challenges have largely focused on the gender dimensions of disinformation messages. The creation, dissemination, and amplification of gendered content that is false, misleading, or harmful has been acknowledged and investigated more than other aspects of disinformation. The gendered content of disinformation campaigns typically includes messages that:

  • Directly attack women, people with diverse sexual orientations and gender identities, and men who do not conform to traditional norms of “masculinity” (as individuals or as groups)
  • Exploit gender roles and stereotypes, exacerbate gender norms and inequalities, promote heteronormativity, and generally increase social intolerance and deepen existing societal cleavages

There are myriad examples of online disinformation in the form of direct attacks on women, people with diverse sexual orientations and gender identities, and men who do not conform to traditional norms of “masculinity.” These attacks can include sexist tropes, stereotypes, and sexualized content (e.g., sexualized deepfakes or the non-consensual distribution of intimate images3). Some of these cases—such as those targeting prominent political candidates and leaders, activists, or celebrities—are well-known, having garnered public attention and media coverage.

But while some cases of these attacks targeting prominent figures may be well-known to the public, many more gendered attacks online take place in a way that is both highly public and surprisingly commonplace. In 2015, a report from the United Nations Broadband Commission for Digital Development’s Working Group on Gender indicated that 73 percent of women had been exposed to or experienced some form of online violence, and that 18 percent of women in the European Union had experienced a form of serious internet violence at ages as young as 15 years. A 2017 Pew Research Center study conducted with a nationally representative sample of adults in the U.S. found that 21 percent of young women (aged 18 to 29 years) reported they had been sexually harassed online. In its 2020 State of the World’s Girls report, Plan International reported on the findings from a survey conducted with more than 14,000 girls and young women aged 15-25 across 22 countries. The survey found that 58 percent of girls reported experiencing some form of online harassment on social media, with 47 percent of those respondents reporting that they were threatened with physical or sexual violence. The harassment they faced was attributed to simply being a girl or young woman who is online (and compounded by race, ethnicity, disability, or LGBTI identity), or to backlash against their work and the content they post if they are activists or outspoken individuals, “especially in relation to perceived feminist or gender equality issues.” These direct attacks are not typically talked about as unusual or surprising; rather, gendered attacks online are often considered a risk that women and girls should expect when choosing to engage in digital spaces, or—in the case of politically active women—“the cost” of doing politics.


The contours of the digital information environment are characterized in part by this type of abuse, and these experiences have largely come to be expected by women and girls and tolerated by society. Though much of this content goes unreported, when survivors or targets of these attacks have brought complaints to law enforcement, technology companies and social media platforms, or other authorities, their concerns often go unresolved. They are commonly told that the content does not meet the standard for criminal prosecution or the standard of abuse covered by a platform’s code of conduct, advised to censor themselves or go offline (or, in the case of minors, parents are advised to take away their daughters’ devices), or told that the threats are harmless.

Beyond developing and deploying direct gender-based attacks against individuals or groups, disinformation actors may exploit gender as fodder for additional content. Such content may exploit gender roles and stereotypes, exacerbate gender norms and inequalities, enforce heteronormativity, and generally increase social intolerance and deepen existing societal cleavages. Examples include content that glorifies hypermasculine behavior in political leaders, feminizes male political opponents, paints women as being ill-equipped to lead or hold public office on the basis of gender stereotypes and norms, engages in lesbian-baiting, conflates feminist and LGBTI rights and activism with attacks on “traditional” families, and displays polarizing instances (real or fabricated) of feminist and LGBTI activism or of anti-women and anti-LGBTI actions to stoke backlash or fear. This type of content can be more nuanced than direct attacks and therefore more resistant to programming interventions.

1.4 Mode of Dissemination

Although gendered hate speech, viral misinformation, and disinformation are not new or exclusively digital challenges, the tools of technology and social media have broadened the reach and impact of disinformation and emboldened the lone individuals and foreign or domestic actors who craft and disseminate these messages. Layering onto the harmful content that already exists in the information environment, disinformation campaigns designed to build upon existing social cleavages and biases can deploy a range of deceptive techniques to amplify gendered hate speech, making these gender biases seem more widely held and prevalent than they are.

Gendered hate speech and misinformation can have immense reach and impact even in the absence of a coordinated disinformation campaign, as this content circulates in the digital information space through organic engagement.  While much of this content is generated and circulated in mainstream digital spaces, there is also a robust network of male-dominated virtual spaces, sometimes referred to collectively as the “manosphere,” where these harmful gendered messages can garner large bases of support before jumping to mainstream social media platforms.  The “manosphere” includes online blogs and message and image boards hosting a variety of anonymous misogynistic, racist, anti-Semitic, and extremist content creators and audiences (“men’s rights,” “involuntarily celibate,” and other misogynist communities intersect with the “alt-right” movement in these spaces)4.

Over time, the communities of men who participate in these information spaces have developed effective strategies to keep these messages in circulation and to facilitate their spread from anonymous digital forums with little moderation to mainstream (social and traditional) media. Individuals who wish to disseminate these harmful messages have found ways to circumvent content moderation (such as using memes or other images, which are more difficult for content moderation mechanisms to detect), and they have developed tactics both to inject this content into the broader information environment and to deploy coordinated attacks against specific targets (individuals, organizations, or movements).

This is in part what makes gender an attractive tool for disinformation actors. The “manosphere” provides ready-made audiences who are ripe for manipulation and activation in the service of a broader influence operation, and these communities have at the ready a toolbox of effective tactics for disseminating and amplifying harmful content. A known disinformation strategy is to infiltrate existing affinity groups to gain the group’s trust and seed its conversations with content intended to further the disinformation actor’s goal. Should disinformation actors manipulate these anti-women communities, they may successfully turn the energies of the “manosphere” against a political opponent, cultivating, in effect, a troll farm of community members willing to carry out this work for free.

1.5 Interpreters

Targeting women and people with diverse sexual orientations and gender identities as interpreters (that is, as consumers or recipients) of disinformation is a tactic that can exacerbate existing societal cleavages, likely in ways that politically or financially benefit the creators and disseminators of these messages. This can include targeting women and people with diverse sexual orientations and gender identities with disinformation designed to exclude them from public or political life (e.g., in South Africa, spreading false information that people wearing fake nails or nail polish cannot vote in an election). In other cases, targeting these groups with disinformation may be part of a broader campaign to create polarizing debates and widen ideological gaps. For example, disinformation campaigns might inflame the views of feminists and supporters of women’s and LGBTI rights, as well as the views of those who are anti-feminist and who oppose women’s and LGBTI equality. Disinformation that targets these groups as interpreters may amplify or distort divergent views to undermine social cohesion.

1.6 Risk

The prevalence of technology and social media has brought new attention to the harms inflicted, especially on women, by information integrity challenges, including disinformation campaigns. Regardless of the individual motivations of the actors who create and disseminate gendered hate speech and disinformation, the gendered impacts of disinformation are typically the same:

  • Exclusion of women and people with diverse sexual orientations and gender identities from politics, leadership, and other prominent roles in the public sphere through their disempowerment, discrimination, and silencing; and
  • Reinforcement of harmful patriarchal and heteronormative institutional and cultural structures.

Harmful gendered content and messaging that seeks to deter women from entering political spaces and to exploit social cleavages has become an expected, and in some cases accepted, part of the digital landscape. Any form of disinformation campaign also has implicitly gendered impacts, as women may be among the consumers or interpreters of any false or problematic content. Disinformation may also have a disproportionate effect on women and girls because of lower levels of educational attainment, media and information literacy, self-confidence, and social support networks, as well as fewer opportunities to participate in programming designed to build resilience against disinformation, owing to factors such as cultural norms and household and family care responsibilities. These are only a small sampling of the factors that likely cause women and girls to be disproportionately affected by disinformation, and they result from broader gender inequalities such as unequal access to and control over resources, decision-making, leadership, and power. For this reason, effective counter-disinformation programming must address all aspects of the disinformation threat by designing and funding programming that is at minimum gender-sensitive, and ideally gender-transformative.

The gender dimensions of disinformation not only affect women and girls, but also people with diverse sexual orientations and gender identities, as well as people with other intersecting, marginalized identities. Due to limited relevant research and programming, there is minimal data available on this subject (a problem in and of itself), but members of the LGBTI population, as well as women and girls who have other marginalized identities, are targeted disproportionately by online harassment and abuse and likely also by disinformation campaigns. It is imperative to consider the differential impact of disinformation on women, girls, and people with diverse sexual orientations and gender identities depending on other aspects of their identity (such as race, religion, or disability). They may be targeted in different ways in the digital information space than individuals who do not share these marginalized identities and may suffer more significant consequences from disinformation campaigns.


The next two sections of the guide further explore two significant gendered impacts of disinformation:

  • Silencing women public figures and deterring women from seeking public roles
  • Undermining democracy and good governance, increasing political polarization, and expanding social cleavages

Silencing women public figures and deterring women from seeking public roles

As the internet and social media have increasingly become major sources of information and news consumption for people across the globe, women in politics are turning to these mediums to reach the public and share their own ideas and policies as an alternative to often biased media coverage. Many women—typically having limited access to funding, small networks, little name recognition, and less traditional political experience and ties than men in politics—note that their social media presence is integral to their careers and credit these platforms with giving them greater exposure to the public, as well as the ability to shape their narratives and engage directly with supporters and constituents. However, they also often find themselves the subjects of alarming amounts of gendered disinformation aimed at delegitimizing and discrediting them and discouraging their participation in politics.

According to research conducted by the Inter-Parliamentary Union with 55 women parliamentarians across 39 countries, 41.8 percent of research participants reported that they had seen “extremely humiliating or sexually charged images of [themselves] spread through social media.” Not only do such experiences discourage individual women politicians from continuing in politics or running for reelection (whether out of concern for their own safety and reputation or for those of their families), but they also have a deleterious effect on the participation of women in politics across entire societies, as women are deterred from entering the political field by the treatment of the women who came before them.

“Research has shown that social media attacks do indeed have a chilling effect, particularly on first-time female political candidates. Women frequently cite the ‘threat of widespread, rapid, public attacks on their personal dignity as a factor deterring them from entering politics.’”

--(Anti)Social Media: The Benefits and Pitfalls of Digital for Female Politicians, Atalanta

Although there has been a recent increase in research investigating women politicians’ experiences with gendered disinformation in the digital information space and social media5, this phenomenon is also experienced by women journalists, election officials, public figures, celebrities, activists, online gamers, and others. Women who are the subjects of disinformation, hate speech, and other forms of online attacks may be discriminated against, discredited, silenced, or pushed to engage in self-censorship.

Perhaps even more pernicious are the effects of these disinformation campaigns on the women and girls who witness these attacks on prominent women. Seeing how women public figures are attacked online, they are more likely to be discouraged and disempowered from entering the public sphere and from participating in political and civic life themselves. The subtext of these threats of harm, character assassinations, and other forms of discrediting and delegitimizing signals to women and girls that they do not belong in the public sphere, that politics, activism, and civic participation were not designed for them, and that they risk violence and harm upon entering these spaces.

Undermining democracy and good governance, increasing political polarization, and expanding social cleavages

“When women decide that the risk to themselves and their families is too great, their participation in politics suffers, as do the representative character of government and the democratic process as a whole.”

--Sexism, Harassment and Violence against Women Parliamentarians, IPU

“Women’s equal participation is a prerequisite for strong, participatory democracies and we now know that social media can be mobilized effectively to bring women closer to government – or push them out.”

--Lucina Di Meco, Gendered Disinformation, Fake News, and Women in Politics

Beyond their impacts on women, girls, and people with diverse sexual orientations and gender identities as individuals and communities, disinformation campaigns that use patriarchal gender stereotypes or norms, target women in their content, or target women as consumers undermine democracy and good governance. As scholar and political scientist Lucina Di Meco notes, inclusion and equal, meaningful participation are prerequisites for strong democracies. When disinformation campaigns hamper that equal participation, elections and democracies suffer.

Disinformation campaigns can use gender dimensions to increase political polarization and expand social cleavages simply by reinforcing existing gender stereotypes, magnifying divisive debates, amplifying fringe social and political ideologies and theories, and upholding existing power dynamics by discouraging the participation of women and people with diverse sexual orientations and gender identities. These actions exclude members of marginalized communities from political processes and democratic institutions and, in so doing, chip away at their meaningful participation in their democracies and their representation in their institutions. Because the voice and participation of citizens are essential to building sustainable democratic societies, silencing the voices of women, girls, and people with diverse sexual orientations and gender identities weakens democracies. Gendered disinformation is therefore not just a “women’s issue,” and tackling it is not just the mandate of “inclusion programming”; it is imperative to counter-disinformation programming and to efforts to strengthen democracy, human rights, and governance around the globe. A plurality of experiences and points of view must be reflected in the way societies are governed in order to ensure “participatory, representative, and inclusive political processes and government institutions.”

Footnotes

1. This framing is adapted from ideas in Claire Wardle’s Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making, as referenced in IFES’ Disinformation Campaigns and Hate Speech: Exploring the Relationship and Programming Interventions.

2. IFES has developed a conceptual “chain of harm” to illustrate the ways in which disinformation, hate speech, and viral misinformation progress from the actors generating this content to the harms that manifest.  The goal of counter-disinformation programming is to disrupt the chain of harm at one or multiple points.  As such, it is critical to understand the gender dimensions of each component in order to develop successful, gender-sensitive interventions.  For more information on the chain of harm, see Disinformation Campaigns and Hate Speech: Exploring the Relationship and Programming Interventions.

3. The non-consensual distribution of intimate images is sometimes referred to as “revenge porn.”

4. See also: How the alt-right’s sexism lures men into white supremacy - Vox; When Women are the Enemy: The Intersection of Misogyny and White Supremacy - ADL; Misogynist Incels and Male Supremacism (newamerica.org)

5. See, e.g. #SHEPERSISTED (she-persisted.org); Engendering Hate: The contours of state-aligned gendered disinformation online - Demos; Toxic Twitter - A Toxic Place for Women | Amnesty International; Gender, Politics and Disinformation on Social Media - Center for Democracy and Technology (cdt.org); CDT Research Workshop: First Steps in Developing a Research Agenda to Address Disinformation, Race, and Gender - Center for Democracy and Technology; Tweets That Chill: Analyzing Online Violence Against Women in Politics | National Democratic Institute (ndi.org); issuesbrief-e.pdf (ipu.org); GLOBAL GAIN UN EVENT (disinfo.eu); among others