Written by Victoria Scott, Senior Research Officer at the International Foundation for Electoral Systems Center for Applied Research and Learning
Around the world, women and people who challenge traditional gender roles by speaking out in male-dominated spaces—such as political leaders, celebrities, activists, election officials, journalists, and others in the public eye—are regularly subjected to biased media reporting, the spread of false or problematic content about them, and targeted character assaults, harassment, abuse, and threats. Any woman, girl, or person who does not conform to gender norms and who engages in public and digital spaces is at risk, although the public may be most familiar with this behavior when it is directed at women leaders. Women who hold or seek positions of public leadership often face criticism that has little to do with their ability or experience—unlike the criticism typically encountered by men in those same positions—and instead confront gendered commentary on their character, morality, appearance, and conformity (or lack thereof) to traditional gender roles and norms. Their representation in the public information space is often defined by sexist tropes, stereotypes, and sexualized content. While not a new challenge, this phenomenon is increasingly pervasive and has been fueled by technology. Although this online malice is most often directed at women and lesbian, gay, bisexual, transgender, and intersex (LGBTI) individuals in the public eye, any person who deviates from gender norms risks being exposed to such abuse.
For donors and implementers, understanding the intersection of gender and disinformation is imperative to designing and delivering comprehensive and effective programming to counter disinformation and hate speech and promote information integrity. Without considering the different ways in which women, girls, men, boys, and people with diverse sexual orientations and gender identities engage in the digital information environment and experience and interpret disinformation, donor and implementer efforts to counter disinformation will not reach the individuals who are among the most marginalized in their communities. The impact and sustainability of these interventions will therefore remain limited. Analyzing disinformation through a gender lens is essential to designing and implementing counter-disinformation programs in ways that recognize and challenge gender inequalities and power relations and that transform gender roles, norms, and stereotypes. This approach is necessary if donors, implementers, and researchers hope to effectively mitigate the threat of disinformation.
An increasing body of research and analysis explores the role of gender in disinformation campaigns, including the gendered impacts of disinformation on individuals, communities, and democracies. While this research presents a compelling case for funders and implementers to view information integrity and counter-disinformation programming through a gender lens, current programming is often limited to interventions to prevent or respond to online gender-based violence or to strengthen women’s and girls’ digital or media and information literacy. These are important approaches to strengthening the integrity of online spaces and responding to the information disorder, but a greater range of programming is both possible and necessary.
Highlight
Distinguishing online gender-based violence and gendered disinformation:
Gendered disinformation and online gender-based violence are concepts that are often conflated. According to the framing used throughout this guidebook, online gender-based violence can be considered a type of gendered disinformation (using gender to target the subjects of attack in false or problematic content), but gendered disinformation encompasses more than online gender-based violence. It reaches beyond gendered attacks carried out online to include harmful messaging that exploits gender inequalities, promotes heteronormativity, and deepens social cleavages. One reason for the frequent conflation of these terms may be that discussions of gender and disinformation typically rely on examples of gendered disinformation that are also examples of online gender-based violence. A common example is fake sexualized content (such as sexualized deepfakes and photoshopped images or edited videos placing a specific woman’s face onto sexualized content), which can be considered both online gender-based violence and gendered disinformation. However, there are also examples of gendered disinformation messages that are not necessarily categorized as online gender-based violence, such as sensationalized and hyper-partisan junk news stories designed to deepen existing ideological divisions and erode social cohesion.1 These two phenomena intersect, and both threaten the integrity of the information environment and full and equal participation in political, civic, and public spheres. It is important for counter-disinformation programming not only to prevent and respond to the direct attacks of harassment and abuse that fall under the label of online gender-based violence, but also to prevent and respond to influence operations that exploit gender inequalities and norms in their messaging.
1 There are differing definitions of the term "gendered disinformation," and a variety of perspectives on what constitutes gendered disinformation and whether or how it is distinct from online gender-based violence, abuse, or harassment. See, for example, the review of existing definitions and distinctions in Jankowicz et al.’s Malign Creativity: How Gender, Sex, and Lies are Weaponized Against Women Online. As scholars and practitioners continue to develop their thinking in this emerging field, these definitions and perspectives continue to evolve.
Explore further:
This section of the guidebook is intended to be a resource to assist donors, implementers, and researchers in applying a gender lens when investigating and addressing information integrity and disinformation. It will also assist funders and practitioners in integrating gender throughout all aspects of counter-disinformation programming.
The section begins by briefly outlining why counter-disinformation programming must be viewed through a gender lens.
The section then defines the term “gendered disinformation” and examines the gender dimensions of disinformation in each of its component parts (actor, message, mode of dissemination, interpreter, and risk).
Explore the gender dimensions of disinformation
- Explore: Actors
- Explore: Messages
- Explore: Modes of Dissemination
- Explore: Interpreters
- Explore: Risks
The section closes with a look first at current counter-disinformation approaches that address gender dimensions and then at some promising new approaches for gender-sensitive counter-disinformation programming. While gender-sensitive programming and good practices are still emerging in the information integrity field, this section of the guidebook offers promising approaches based on known good practices in related fields. Specific examples of integrating gender into counter-disinformation interventions are also included throughout the guidebook’s thematic topics.