1. Gender Considerations in Counter-Disinformation Programming

Updated on April 1, 2021

The onus of responding to and preventing gendered disinformation should not fall on the shoulders of subjects of gendered digital attacks, nor on those targeted or manipulated as consumers of false or problematic content.

Donors and implementers might wonder what makes gendered disinformation distinct from other types of disinformation, why it is important to analyze the digital information landscape and any form of disinformation (whether or not it is specifically gendered) from a gender perspective, and why counter-disinformation programming should be designed and implemented with gender-specific considerations. Answers to these questions include:

  • Disinformation that uses traditional gender stereotypes, norms, and roles in its content plays to entrenched power structures and works to uphold heteronormative political systems that maintain the political domain as that of cisgender, heterosexual men.
  • The means of accessing and interacting with information on the internet and social media differs for women and girls compared with men and boys.
  • The experience of disinformation and its impact on women, girls, and people with diverse sexual orientations and gender identities differs from that of cisgender, heterosexual men and boys.
  • Disinformation campaigns may disproportionately affect women, girls, and people with diverse sexual orientations and gender identities, which is further compounded for people with multiple marginalized identities (such as race, religion, or disability).

In designing and funding counter-disinformation activities, donors and implementers should consider the variety of forms that gendered disinformation, and the gendered impacts of disinformation more broadly, can take. Counter-disinformation efforts that holistically address both gender as the subject of disinformation campaigns and women and girls as consumers of disinformation enable multidimensional interventions that are effective and sustainable.

1.1 What are the gender dimensions of disinformation?

The intersection of information integrity challenges and gender is complex and nuanced. It encompasses not only the ways gender is employed in deliberate disinformation campaigns, but also the ways in which gendered misinformation and hate speech circulate within an information environment and are often amplified by malign actors to exploit existing social cleavages for personal or political gain. This intersection of gender and information integrity challenges will be referred to as “gendered disinformation” throughout this section.

Gendered disinformation includes false, misleading, or harmful content that exploits gender inequalities or invokes gender stereotypes and norms, including to target specific individuals or groups; this description refers to the content of the message.  Beyond gendered content, however, other important dimensions of gendered disinformation include: who produces and spreads problematic content (actor); how and where problematic content is shared and amplified, and who has access to certain technologies and digital spaces (mode of dissemination); who is the audience that receives or consumes the problematic content (interpreter); and how the creation, spread, and consumption of problematic content affects women, girls, men, boys, and people with diverse sexual orientations and gender identities, as well as the gendered impacts of this content on communities and societies (risk)1.  

By breaking down the gender dimensions of information integrity challenges into their component parts – actor, message, mode of dissemination, interpreters, and risk – we can better identify different intervention points where gender-sensitive programming can make an impact.2

Below we illustrate the ways gender influences each of these five component parts of disinformation, hate speech, and viral misinformation.

Graphic: The amplification of viral misinformation and hate speech through individual or coordinated disinformation, IFES (2019)

A. Actor

As with other forms of disinformation, those who produce and share disinformation messages with explicitly gendered impacts may be motivated by ideology or by a broader intent to undermine social cohesion, limit political participation, incite violence, or sow mistrust in information and democracy for political or financial gain. Perpetrators of gendered disinformation may act alone or in coordination, and they may be ideologues, members of extremist or fringe groups, or individuals pursuing purely financial gain (such as individuals employed as trolls). Extrapolating from the field of gender-based violence, some of the risk factors that may contribute to a person’s susceptibility to creating and spreading hate speech and disinformation that exploits gender could include:

  • At the individual level: attitudes and beliefs; education; income; employment; and social isolation
  • At the community level: limited economic opportunities; low levels of education; and high rates of poverty or unemployment
  • At the societal level: toxic masculinity or expectations of male dominance, aggression, and power; heteronormative societal values; impunity for violence against women; and patriarchal institutions

Gender-transformative interventions that seek to promote gender equity and healthy masculinities, strengthen social support and promote relationship-building, and increase education and skills development could build protective factors against individuals becoming perpetrators of gendered hate speech and disinformation. Similarly, interventions that seek to strengthen social and political cohesion, build economic and education opportunities in a community, and reform institutions, policies, and legal systems could contribute to these protective factors.  In addition to identifying interventions to prevent individuals from becoming perpetrators of disinformation, practitioners must also acknowledge the complex discussions around the merits of sanctioning actors for perpetrating disinformation and hate speech.  

It is worth noting that the present study did not identify any research or programming investigating women’s potential role as perpetrators of disinformation.  While it is widely known that the vast majority of perpetrators of online gender-based violence are men, researchers do not yet know enough about individuals who create and spread disinformation to understand whether, to what extent, or under what conditions women are prevalent actors.  When considering the motivations and risk factors of actors who perpetrate disinformation, it is important to first understand who those actors are.  This is an area that requires more research.

B. Message

Researchers and practitioners working at the intersection of gender and information integrity challenges have largely focused on the gender dimensions of disinformation messages. The creation, dissemination, and amplification of gendered content that is false, misleading, or harmful has been acknowledged and investigated more than other aspects of disinformation. The gendered content of disinformation campaigns typically includes messages that:

  • Directly attack women, people with diverse sexual orientations and gender identities, and men who do not conform to traditional norms of “masculinity” (as individuals or as groups)
  • Exploit gender roles and stereotypes, exacerbate gender norms and inequalities, promote heteronormativity, and generally increase social intolerance and deepen existing societal cleavages

There are myriad examples online of disinformation in the form of direct attacks on women, people with diverse sexual orientations and gender identities, and men who do not conform to traditional norms of “masculinity.” This can include sexist tropes, stereotypes, and sexualized content (e.g., sexualized deepfakes or the non-consensual distribution of intimate images3). Some of these cases—such as those targeting prominent political candidates and leaders, activists, or celebrities—are well-known, having garnered public attention and media coverage.

Highlight


In 2016, in the lead-up to parliamentary elections in the Republic of Georgia, a disinformation campaign targeted women politicians and a woman journalist with videos allegedly showing them engaged in sexual activity. The videos, which were shared online, were accompanied by intimidating messages and threats that the targets should resign or additional videos allegedly featuring them would be released.

In another Georgian example, the prominent journalist and activist Tamara Chergoleishvili was targeted in a fake video that allegedly showed her engaged in sexual activity with two other people. One of the people appearing in the video with Chergoleishvili was a man who was labeled “gay” and suffered consequences as a result of homophobic sentiment in Georgia.

Examples such as these may seem sensational and extraordinary, but many women in the public eye encounter shocking attacks like those described above. Similar cases of sexualized distortion have emerged against women in politics globally.

The potential impact of this type of gendered disinformation is to exclude and intimidate the targets, to discourage them from running for office, and to otherwise disempower and silence them. Perpetrators can also use these attacks to push their targets to withdraw from politics or to participate in ways that are directed by fear; to shift popular support away from politically active women, undermining a significant leadership demographic, manipulating political outcomes, and weakening democracy; and to influence how voters view particular parties, policies, or entire political orders. Such attacks can also be used for gender policing (disciplining women and men who are perceived to violate the gendered norms and stereotypes that govern their society).


Sources: Coda Story, BBC, Radio Free Europe/Radio Liberty    

But while some of these attacks targeting prominent figures may be well-known to the public, many more gendered attacks online take place in a way that is both highly public and surprisingly commonplace. In 2015, a report from the United Nations Broadband Commission for Digital Development’s Working Group on Gender indicated that 73 percent of women had been exposed to or experienced some form of online violence, and that 18 percent of women in the European Union had experienced a form of serious internet violence at ages as young as 15 years. A 2017 Pew Research Center study conducted with a nationally representative sample of adults in the U.S. found that 21 percent of young women (aged 18 to 29 years) reported they had been sexually harassed online. In its 2020 State of the World’s Girls report, Plan International presented findings from a survey of more than 14,000 girls and young women aged 15 to 25 across 22 countries. The survey found that 58 percent of girls reported experiencing some form of online harassment on social media, with 47 percent of those respondents reporting that they were threatened with physical or sexual violence. The harassment they faced was attributed to simply being a girl or young woman who is online (and compounded by race, ethnicity, disability, or LGBTI identity), or to backlash against their work and the content they post if they are activists or outspoken individuals, “especially in relation to perceived feminist or gender equality issues.” These direct attacks are not typically talked about as unusual or surprising; rather, gendered attacks online are often considered a risk that women and girls should expect when choosing to engage in digital spaces, or—in the case of politically active women—“the cost” of doing politics.


The contours of the digital information environment are characterized in part by this type of abuse, and these experiences have largely come to be expected by women and girls and tolerated by society. Much of the time this content goes unreported, and when survivors or targets of these attacks have brought complaints to law enforcement, technology companies and social media platforms, or other authorities, their concerns often go unresolved. They are commonly told that the content does not meet the threshold for criminal prosecution or the standard of abuse covered by a platform’s code of conduct; advised to censor themselves or go offline (or, in the case of minors, told that parents should take away their daughters’ devices); or told that the threats are harmless.

Highlight


Because of the ways that identity can be weaponized online, and the intersectional nature of gendered abuse, women, girls, and people with diverse sexual orientations and gender identities who also have other marginalized identities (such as race, religion, or disability) experience this abuse at higher rates and in different ways.

Beyond developing and deploying direct gender-based attacks against individuals or groups, disinformation actors may exploit gender as fodder for additional content. Such content may exploit gender roles and stereotypes, exacerbate gender norms and inequalities, enforce heteronormativity, and generally increase social intolerance and deepen existing societal cleavages. Examples include content that glorifies hypermasculine behavior in political leaders, feminizes male political opponents, paints women as being ill-equipped to lead or hold public office on the basis of gender stereotypes and norms, engages in lesbian-baiting, conflates feminist and LGBTI rights and activism with attacks on “traditional” families, and displays polarizing instances (real or fabricated) of feminist and LGBTI activism or of anti-women and anti-LGBTI actions to stoke backlash or fear. This type of content can be more nuanced than direct attacks and therefore more resistant to programming interventions.

C. Mode of Dissemination

Although gendered hate speech, viral misinformation, and disinformation are not new or exclusively digital challenges, the tools of technology and social media have extended the reach and impact of disinformation and emboldened the lone individuals and foreign or domestic actors who craft and disseminate these messages. Layering onto the range of harmful content that already exists in the information environment, disinformation campaigns designed to build upon existing social cleavages and biases can deploy a range of deceptive techniques to amplify gendered hate speech, making these gender biases seem more widely held and prevalent than they are.

Gendered hate speech and misinformation can have immense reach and impact even in the absence of a coordinated disinformation campaign, as this content circulates in the digital information space through organic engagement.  While much of this content is generated and circulated in mainstream digital spaces, there is also a robust network of male-dominated virtual spaces, sometimes referred to collectively as the “manosphere,” where these harmful gendered messages can garner large bases of support before jumping to mainstream social media platforms.  The “manosphere” includes online blogs and message and image boards hosting a variety of anonymous misogynistic, racist, anti-Semitic, and extremist content creators and audiences (“men’s rights,” “involuntarily celibate,” and other misogynist communities intersect with the “alt-right” movement in these spaces)4.

Over time, the men who participate in these information spaces have developed effective strategies to keep these messages in circulation and to facilitate their spread from anonymous digital forums with little moderation to mainstream (social and traditional) media. Individuals who wish to disseminate these harmful messages have found ways to circumvent content moderation (such as using memes or other images, which are more difficult for content moderation mechanisms to detect) and have developed tactics to inject this content into the broader information environment and to deploy coordinated attacks against specific targets (individuals, organizations, or movements).

This is in part what makes gender an attractive tool for disinformation actors. The “manosphere” provides ready-made audiences who are ripe for manipulation and activation in the service of a broader influence operation, and these communities have a toolbox of effective tactics for disseminating and amplifying harmful content at the ready. One known disinformation strategy is to infiltrate existing affinity groups, gain the group’s trust, and seed group conversations with content intended to further the disinformation actor’s goals. Should disinformation actors manipulate these anti-women communities, they may successfully turn the energies of the “manosphere” against a political opponent, in effect cultivating a troll farm of community members willing to carry out their work for free.

Highlight


In November 2020, Facebook announced its takedown of a network of profiles, pages, and groups engaged in coordinated inauthentic behavior. The disinformation campaign, which originated in Iran and Afghanistan, targeted Afghans, with a focus on women as consumers of the content shared. Almost half of the profiles on Facebook and more than half of the accounts on Instagram in the network presented as women’s accounts, and a number of pages in the network were billed as being for women. The women-oriented content shared across the network promoted women’s rights and highlighted the Taliban’s treatment of women. The Stanford Internet Observatory’s analysis of the network indicated that additional content associated with the network was critical of the Taliban and noted that “[i]t is possible the intent [of the women-focused content] was to undermine the peace negotiations between the Afghan government and the Taliban; the Taliban is known for restricting women’s rights.”

The potential impact of gendered disinformation like this is to deepen societal divides and exploit ideological differences, compromising social cohesion and undermining political processes.

 

Source: Stanford Internet Observatory

 

D. Interpreters

Targeting women and people with diverse sexual orientations and gender identities as interpreters (the consumers or recipients) of disinformation is a tactic that can exacerbate existing societal cleavages, likely in ways that politically or financially benefit the creators and disseminators of these messages. This can include targeting women and people with diverse sexual orientations and gender identities with disinformation designed to exclude them from public or political life (e.g., in South Africa, spreading false information that people wearing fake nails or nail polish cannot vote in an election). In other cases, targeting these groups with disinformation may be part of a broader campaign to create polarizing debates and widen ideological gaps. For example, disinformation campaigns might inflame the views of feminists and supporters of women’s and LGBTI rights, as well as the views of those who are anti-feminist and who oppose women’s and LGBTI equality. Disinformation aimed at these interpreters may amplify or distort divergent views to undermine social cohesion.

E. Risk

The prevalence of technology and social media has brought new attention to the harms inflicted–especially on women–by information integrity challenges, including disinformation campaigns. Regardless of the individual motivations of the actors who create and disseminate gendered hate speech and disinformation, the gendered impacts of disinformation are typically the same:

  • Exclusion of women and people with diverse sexual orientations and gender identities from politics, leadership, and other prominent roles in the public sphere through their disempowerment, discrimination, and silencing; and
  • Reinforcement of harmful patriarchal and heteronormative institutional and cultural structures.

Highlight


Gender-sensitive programming "attempt[s] to redress existing gender inequalities," while gender-transformative programming "attempt[s] to re-define women and men's gender roles and relations".

While gender-sensitive programming aims to "address gender norms, roles and access to resources in so far as needed to reach project goals," gender-transformative programming aims to "[transform] unequal gender relations to promote shared power, control of resources, decision-making, and support for women's empowerment."

 

Source: UN Women, Glossary of Gender-related Terms and Concepts

Harmful gendered content and messaging that seeks to deter women from entering political spaces and to exploit social cleavages has become an expected, and in some cases accepted, part of the digital landscape. There are also implicitly gendered impacts of any form of disinformation campaign, as women may be the consumers or interpreters of any false and problematic content. Disinformation may also have a disproportionate effect on women and girls due to such factors as lower levels of educational attainment, media and information literacy, self-confidence, and social support networks, as well as fewer opportunities to participate in programming designed to build resilience against disinformation, owing to cultural norms and household and family care responsibilities. These are only a small sampling of the factors that likely cause women and girls to be disproportionately affected by disinformation, and they result from broader gender inequalities such as unequal access to and control over resources, decision-making, leadership, and power. For this reason, effective counter-disinformation programming must address all aspects of the disinformation threat by designing and funding programming that is at minimum gender-sensitive, and ideally gender-transformative.

The gender dimensions of disinformation not only affect women and girls, but also people with diverse sexual orientations and gender identities, as well as people with other intersecting, marginalized identities. Due to limited relevant research and programming, there is minimal data available on this subject (a problem in and of itself), but members of the LGBTI population, as well as women and girls who have other marginalized identities, are targeted disproportionately by online harassment and abuse and likely also by disinformation campaigns. It is imperative to consider the differential impact of disinformation on women, girls, and people with diverse sexual orientations and gender identities depending on other aspects of their identity (such as race, religion, or disability). They may be targeted in different ways in the digital information space than individuals who do not share these marginalized identities and may suffer more significant consequences from disinformation campaigns.

Footnotes

1This framing is adapted from ideas in Claire Wardle’s Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making, as referenced in IFES’ Disinformation Campaigns and Hate Speech: Exploring the Relationship and Programming Interventions.

2IFES has developed a conceptual “chain of harm” to illustrate the ways in which disinformation, hate speech, and viral misinformation progress from the actors generating this content to the harms that manifest.  The goal of counter-disinformation programming is to disrupt the chain of harm at one or multiple points.  As such, it is critical to understand the gender dimensions of each component in order to develop successful, gender-sensitive interventions.  For more information on the chain of harm, see Disinformation Campaigns and Hate Speech: Exploring the Relationship and Programming Interventions.

3The non-consensual distribution of intimate images is sometimes referred to as “revenge porn.”

4See also: How the alt-right’s sexism lures men into white supremacy - Vox; When Women are the Enemy: The Intersection of Misogyny and White Supremacy - ADL; Misogynist Incels and Male Supremacism (newamerica.org)