4. Promising Approaches to Gender-Sensitive Counter-Disinformation Programming

Establish institutional and organizational protocols

Several recent research studies7 investigating the prevalence and impact of online harassment and abuse of (women) journalists in the United States and around the world have found that many subjects of such attacks do not report these incidents to their employers or other authorities out of concern that nothing can or would be done in response, or for fear of personal or professional repercussions from reporting.  In cases where they do report these incidents to their employers, the organizations may not take action or may handle reports inconsistently and inadequately.  A key recommendation that surfaced from these findings is to establish institutional and organizational protocols, including specific policies and practices to support those attacked and to address reports of attacks.

Based on this research and work in the area of online gender-based violence, donors and implementers should support institutions and organizations such as political parties or campaigns, EMBs, news and media outlets, and activist or advocacy organizations to establish comprehensive institutional protocols to prevent attacks and respond to reports, including: 

  • Providing appropriate digital safety and security training and education about online harassment
  • Establishing clear and accessible reporting mechanisms that ensure the safety and protection of survivors of online violence and gendered disinformation, as well as their ability to freely participate in digital spaces
  • Ensuring systematic and consistent investigation of reports of attacks and referrals to appropriate authorities
  • Establishing a variety of responses that institutions will offer to support their staff or members who are subjects of attacks (e.g. screening and documenting threats, reporting to platforms and/or authorities, coordinating counter-messaging, and sharing guidance and providing support to staff or members who choose to block or confront the perpetrators of their attacks)
  • Providing appropriate resources and referrals following a report, such as physical security, psychological support, legal support, and personal information scrubbing services

In order to determine what protocols are needed, and to be responsive to the lived experiences of women and people with diverse sexual orientations and gender identities at work, programming should allow time and funding for institutions to survey their staff about their experiences and involve staff in decisions about the protocols, policies, and practices.

This approach can be adapted from the journalism and media industry to other organizations and institutions where gendered disinformation attacks are prevalent, instituting policies and practices that ensure supportive, consistent, and effective responses to direct attacks.  This intervention can help combat the impunity of perpetrators of gendered disinformation attacks, as well as the silencing, self-censorship, and withdrawal from political and public life experienced by the subjects of these attacks.

 

Coordinate prevention, response, and risk mitigation strategies and establish appropriate case management and referral pathways

Gendered disinformation, much like gender-based violence, is a challenge that requires the involvement of stakeholders across multiple sectors and at multiple levels.  Prevention and response efforts to address gendered disinformation depend on cooperation between the public and private sectors, including technology firms and media outlets (especially social media and digital communications platforms), law enforcement and justice authorities, civil society, psychosocial and mental health providers, and other health providers in cases where technology-fueled disinformation efforts may result in physical harm.  Further, gendered disinformation risk mitigation efforts also depend on cooperation and information sharing between these stakeholders and international- and national-level policymakers (to inform legal and regulatory reform), civil society actors (to advocate for appropriate, effective, and sustainable interventions), the education sector (to inform curricula related to critical thinking and analytical skills, media and information literacy, and digital safety), and the security sector in cases where incidents of gendered disinformation may be part of a coordinated campaign by malign foreign or domestic actors.

Donors and implementers should look to the robust experience of the humanitarian aid sector, specifically that of gender-based violence (GBV) prevention and response coordinators and service providers, to develop a coordinated approach to gender-sensitive disinformation interventions.  Specifically, funders and implementers can adapt and draw guidance from the Handbook for Coordinating Gender-based Violence Interventions in Emergencies and model national-level coordination networks and protocols on relevant elements of the approach detailed in this handbook to implement gender-sensitive responses to disinformation. 

Two important elements of a coordinated approach to GBV interventions in emergencies to carry over when adapting this approach are case management and the establishment and use of appropriate referral pathways.  Establishing appropriate case management in this scenario might entail: 1) the stakeholder who receives a complaint of gendered disinformation (for instance, a social media platform or local police) conducts a standard intake process with the person reporting; and 2) the stakeholder who receives the complaint or report uses an established referral pathway to refer the reporting party to a local civil society organization (for instance, local women’s organizations that are experienced GBV service providers) for case management and additional referrals as appropriate. Referring the reporting party to an established case manager that is trained to work with targets or survivors of gendered disinformation and networked with the other stakeholders can streamline supportive services for the reporting party by establishing one primary point of contact responsible for interfacing with them. The case manager organization would be responsible for communicating the various response and recourse options available, providing referrals to appropriate service providers in the referral network and referring cases to appropriate members of the coordination network for follow-up, and (in cases of a direct attack) providing support to the target or survivor of the attack.  

Establishing referral pathways in this scenario would involve: identifying or establishing appropriate organizations or institutions responsible for different aspects of responding to reports of gendered disinformation; ensuring that all coordination network organizations and institutions have access to the referral pathways, enabling them to receive initial reports of incidents and refer reporting parties to a local case manager organization; and ensuring that case managers inform the reporting party about available services and avenues for pursuing different interventions or recourse.  If the reporting party gives permission, the case manager should also connect them with relevant services in the referral pathway.  A simplified model of this flow is sketched below.
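
To make this flow concrete, the sketch below models the intake-and-referral steps just described. It is a minimal illustration under assumptions: the names, data structure, and flow are hypothetical and do not represent any existing system or any specific network’s protocol.

```python
# Hypothetical sketch of the intake-and-referral flow described above;
# all names and structure are illustrative only, not an existing system.
from dataclasses import dataclass, field

@dataclass
class Report:
    reporting_party: str
    received_by: str                  # e.g., a platform or local police
    consented_to_referrals: bool = False
    history: list = field(default_factory=list)

def intake(report):
    """Step 1: the receiving stakeholder runs a standard intake process."""
    report.history.append(f"intake completed by {report.received_by}")
    return report

def refer_to_case_manager(report, case_manager_org):
    """Step 2: refer the reporting party to a local case manager
    organization, their single primary point of contact going forward."""
    report.history.append(f"referred to case manager: {case_manager_org}")
    return report

def connect_services(report, services):
    """The case manager connects the reporting party with services in the
    referral pathway only with the reporting party's permission."""
    if report.consented_to_referrals:
        for service in services:
            report.history.append(f"referred to service: {service}")
    return report

# Usage: a platform receives a report, refers it onward, and the case
# manager makes consent-based referrals within the coordination network.
r = Report("reporting party", "social media platform", consented_to_referrals=True)
r = intake(r)
r = refer_to_case_manager(r, "local women's organization")
r = connect_services(r, ["psychosocial support", "legal support"])
print("\n".join(r.history))
```

The consent check mirrors the survivor-centered principle above: referrals to services happen only with the reporting party’s permission.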

Donors should consider supporting: 

  • A mapping or sectoral analysis of relevant stakeholders 
  • A convening of practitioners and experts to discuss the gendered disinformation landscape and needs
  • Training and sensitization for law enforcement authorities, legal practitioners, and policymakers on gender, online and technology-facilitated gender-based violence, and disinformation
  • The establishment of a coordination network that includes social media and digital communications platforms, law enforcement and justice authorities, civil society, psychosocial and mental health providers, and other health providers
  • The development of clear roles and responsibilities of network members, for example establishing case manager organizations with support from civil society and governments
  • The development of response protocols to guide the coordination, management, prevention, and response efforts of the network, including the development of a case management methodology and referral pathway

This intervention can contribute to the delivery of a holistic, survivor-centered approach to gender-sensitive counter-disinformation prevention and response programming, as well as combat impunity for perpetrators by institutionalizing a consistent and systematic approach to reporting claims to platforms and law enforcement authorities for investigation and recourse.

 

Build networks and communities of supporters and deploy counterspeech

“Don’t feed the trolls” is a common warning offered to those who find themselves the subjects of gendered disinformation.  Experts long advised that the best way to counter direct attacks targeting someone because of their gender and exploiting gendered norms and stereotypes was simply to ignore them.  Recently, however, the dialogue around this issue has begun to evolve.

While some still advise not to “feed the trolls”—in other words, to simply ignore, or to block, report, and then ignore the harmful content hurled at and about them online—others who work with the subjects of these attacks, as well as those who have themselves been the subjects of such attacks, have begun to acknowledge the shortcomings of this approach. They point to the empowerment that subjects of gendered disinformation, and those who witness it, may derive from speaking up and calling out the attacks (or seeing others do so), and to the need to expose misogyny when it surfaces in digital spaces.  Research conducted as part of the Name It. Change It. project also indicates that women politicians who directly respond to sexist attacks and call out the misogyny, harassment, or abuse they face online (or when a third party does so on their behalf) are able to regain credibility with voters they may initially have lost as a result of being attacked.

It is important to clearly state that, while there are ongoing and evolving discussions on this topic about how best individuals can or ‘should’ respond to gendered disinformation, it is not the responsibility of those who find themselves the subjects of such attacks to respond in any one way, if at all, nor to prevent the occurrence or take steps to mitigate the risks of these attacks.  Those suffering gendered disinformation attacks should not be expected to shoulder the burden of solving this problem.  Rather, it is the responsibility of a variety of stakeholders—including the technology platforms, government institutions and regulatory bodies, political parties, media organizations, and civil society—to establish and implement effective approaches and mechanisms to prevent and respond to gendered disinformation, as well as to work to address its root causes and to mitigate its long-lasting and far-reaching impacts.  Nevertheless, best practice adapted from gender-based violence response programming indicates that when the subject of gendered disinformation reports an incident, they should be presented with information on the available options for response and recourse and any potential benefits and further risks associated with those options.

One such possible response to gendered disinformation is counterspeech, which the Dangerous Speech Project defines as “any direct response to hateful or harmful speech which seeks to undermine it,” also noting, “There are two types of counterspeech: organized counter-messaging campaigns and spontaneous, organic responses.” Individuals who have been targeted by harmful content online might choose to engage in counterspeech themselves, or they might enlist the support of their own personal and professional community or an online network of supporters to craft and deploy counterspeech publicly on their behalf or privately through messages of support (for example, via email or on a closed platform).  The effectiveness of counterspeech is difficult to measure, in part because those who engage in it may have different goals (ranging from changing the attitude of the perpetrator, to limiting the reach of the harmful content, to providing the subject of an attack with supportive messages). However, emerging research and anecdotal evidence indicate that crafting and deploying counterspeech (whether by the subjects of these attacks, their institutions or organizations, or a broader online community of supporters) is a promising practice in responding to gendered disinformation.8

A variety of positive outcomes of counterspeech have been cited, including:

  • delivering a sense of empowerment back to the targets of gendered disinformation attacks, allowing them to take back their narrative
  • increasing the likelihood of positive, civil, or “pro-social” comments and/or decreasing the likelihood of negative, uncivil, or “anti-social” comments
  • drowning out harmful content with supportive counterspeech, both on public social media posts and in private communications
  • demonstrating to those sharing harmful content that their language or message is not accepted

 

Social media monitoring can play an important role in countering gendered disinformation, and can be linked to the coordination and deployment of counterspeech activities in response to gendered disinformation attacks.

Researchers, practitioners, and civil society actors are increasingly engaging in social media monitoring activities to inform their understanding of gendered disinformation, to identify entry points to disrupt gendered disinformation, viral misinformation, and hate speech, and to advocate for laws or regulations that are responsive to the growing challenges of online gender-based violence and the spread of harmful gendered content online. 

Social media monitoring in the context of gendered disinformation can be used to serve two primary functions: 

  • To listen to speech taking place across the digital information environment, monitor sentiment, and provide an important window into the creation, dissemination, and amplification of harmful content 
  • To monitor the adherence of political actors, media, and public institutions to legal and regulatory guidance and codes of conduct around disinformation and hate speech, and to monitor technology platforms’ enforcement of their community standards, terms of use, or codes of conduct

An early step donors, researchers, and implementers should take is to create methodologies and tools to monitor social media and collect data on gendered disinformation, hate speech, and viral misinformation. These should be adapted to local contexts and applied in research and programming in order to mount an effective effort to counter gendered disinformation. In 2019, CEPPS released a social media analysis tool to monitor online violence against women in elections. The tool provides step-by-step guidance on how to identify trends and patterns of online violence, including: identifying the potential targets to monitor (e.g., women politicians, candidates, activists); defining the hate speech lexicon to monitor; choosing which social media platforms to monitor; selecting the research questions; running the analysis using data mining software; and then analyzing the results. A full description of the step-by-step process can be found in CEPPS’ Violence Against Women in Elections Online: A Social Media Analysis Tool.
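
As an illustration of the analysis steps above, the sketch below runs a simple lexicon match over a file of collected posts. It is a minimal sketch under assumptions: the lexicon terms, input file name, and keyword-matching logic are placeholders and do not reproduce the CEPPS tool or its data mining software.

```python
# Minimal sketch of a lexicon-based monitoring pass over collected posts,
# loosely following the steps above. The lexicon terms, input file, and
# matching logic are illustrative placeholders, not the CEPPS tool.
import csv
import re
from collections import Counter

# Define the lexicon to monitor (in practice, built with local experts
# and specific to language and context; placeholder terms here).
LEXICON = {"example slur", "example threat phrase"}

def matches(text, lexicon):
    """Return the lexicon terms found in a post (case-insensitive,
    whole-word matches only)."""
    lowered = text.lower()
    return [term for term in lexicon
            if re.search(r"\b" + re.escape(term) + r"\b", lowered)]

def run_analysis(posts_csv):
    """Scan collected posts (one row per post, with 'target' and 'text'
    columns) and count lexicon hits per monitored target."""
    hits_per_target = Counter()
    with open(posts_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            n = len(matches(row["text"], LEXICON))
            if n:
                hits_per_target[row["target"]] += n
    return hits_per_target

# Analyze the results: which monitored targets attract the most
# lexicon-flagged content over the collection period.
print(run_analysis("collected_posts.csv").most_common(10))
```

In practice, counting raw lexicon hits is only a starting point; the CEPPS guidance pairs results like these with qualitative analysis of trends and patterns.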

NDI has also developed a methodology for effectively scraping and analyzing such data in its reports “Tweets that Chill” and “Engendering Hate” (the latter produced with Demos), through research in five countries. An essential step of the methodology is creating a lexicon, in local languages, of gender-based harassing language and the political language of the moment through workshops with local women’s rights organizations and civic technology organizations.

Some of the key lessons from this research include the following (see the lexicon sketch after the list):

  • Contextually- and linguistically-specific lexicons of online violence must be created and then evolve: “Across all case study countries, workshop participants highlighted the fluid and evolving nature of language and brainstormed ways to account for this nuance in the study methodology. For example, NDI learned from the Colombia workshop that violent language in Spanish varied across Latin America, with both Colombia-specific words and words from other parts of the region being used within the country. In Indonesia, religious words or phrases were used, complicating and heightening the online violence by invoking religious messages at the same time. In Kenya, workshop participants noted that a number of violent words/phrases that were in common usage in spoken Swahili had not yet made it into written text online on Twitter. These varied lessons point to the need for contextually- and linguistically-specific lexicons that can be continuously refreshed, modified, and implemented with human coders working alongside computer algorithms.” (excerpted from “Tweets that Chill”)

 

  • Attention to minority communities and intersecting identities is essential: “Online [violence against women in politics] is varied and contextual, as it differs from country to country and culture to culture. However, it is also the case that the expressions used and impacts of online violence can vary significantly between and among communities within the same country. For this reason, it is important to intentionally include and consider historically marginalized communities among women (e.g. women with disabilities, LGBTI women, and female members of religious and ethnic minorities) when exploring the phenomenon of online [violence against women in politics]. During the Colombia workshop, female representatives from the deaf community shared that the violence they faced was not in text, but through the uploading of violent GIFs and/or video clips in sign-language. It was explained that this delivery mechanism was particularly effective in conveying threat and insecurity because, for the majority of the members of the deaf community in Colombia, sign language is their first language, and the targeting was therefore unmistakable. Understanding that the kinds of threats and modes of online violence can differ substantially when targeting different marginalized communities indicates that further work is required to create relevant lexicons.” (excerpted from “Tweets that Chill”)

 

  • Center Local Expertise: “How gendered disinformation is framed and spreads across a network varies greatly according to context. Identifying or mitigating gendered disinformation cannot be successful without the central involvement and direction of local experts who understand the subtleties of how gendered disinformation may be expressed and where it is likely to arise and when. Platforms should support the work of local experts in identifying and combating gendered disinformation, for instance through the provision of data access or the trialing of potential responses through changes to platform design. Automated systems for identifying gendered disinformation are unlikely to have high levels of accuracy - though if employed, should be employed transparently and overseen by local experts.” (excerpted from “Engendering Hate”)
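
The lexicon sketch referenced above illustrates the first and third lessons: a contextually specific lexicon that can be continuously refreshed, with human coders confirming terms before automated matching uses them. The data structure and review workflow are illustrative assumptions, not NDI’s or Demos’s actual tooling.

```python
# Minimal sketch of a lexicon that can be "continuously refreshed,
# modified, and implemented with human coders working alongside computer
# algorithms." Structure and workflow are illustrative assumptions only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class LexiconEntry:
    term: str
    language: str            # e.g., "es-CO" for Colombian Spanish
    added_by: str            # workshop or human coder who proposed the term
    added_on: date
    confirmed: bool = False  # set True only after human-coder review
    notes: str = ""          # context, e.g., "religious phrasing, Indonesia"

@dataclass
class Lexicon:
    country: str
    entries: list = field(default_factory=list)

    def propose(self, term, language, coder, notes=""):
        """New terms enter unconfirmed: automated matching ignores them
        until a human coder reviews and confirms the term."""
        self.entries.append(
            LexiconEntry(term, language, coder, date.today(), notes=notes))

    def active_terms(self):
        """Only human-confirmed terms feed the automated matcher."""
        return [e.term for e in self.entries if e.confirmed]

# Usage: a workshop surfaces a new regional term; it reaches the matcher
# only after review, keeping local experts in control of the automation.
lexicon = Lexicon(country="Colombia")
lexicon.propose("término de ejemplo", "es-CO", coder="workshop_2019",
                notes="regional variant noted by local partners")
print(lexicon.active_terms())  # [] until a human coder confirms the term
```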

 

Section 6.2 of the Legal and Regulatory chapter, on building capacity to monitor violations, and the Election Monitoring chapter explore these concepts further.

Seemingly in response to what many perceive as a lack of adequate intervention by policymakers and technology platforms to address gendered disinformation, a variety of NGOs, civil society groups, and advocacy organizations have designed interventions to train likely targets of these digital attacks (as well as their employers, allies, and bystanders) to develop and implement effective counterspeech campaigns.  Others have established online communities of supporters who stand ready to assist the targets of these attacks with counterspeech efforts, among other supportive services such as monitoring the digital space where an attack is taking place and assisting the target in reporting the incident.

 

Counterspeech training examples:

  • Tactical Tech’s Gendersec Training Curricula on “Hacking Hate Speech” – a training workshop curriculum on how to set up an online support network, create textual and visual counterspeech content, and deploy a counterspeech campaign
  • PEN America’s Online Harassment Field Manual – a training guide for journalists and writers on how to respond to online harassment and abuse, including building a community of supporters and developing counterspeech messages; includes guidance for employers on how to support staff experiencing online harassment, including through counterspeech

Online communities of supporters and counterspeech programming examples:

  • Hollaback!’s HeartMob project – an online platform that has an at-the-ready network of supporters to respond to users’ reports of online harassment and provide positive counterspeech (among other supportive services)
  • TrollBusters – an at-the-ready network of supporters to respond to women journalists’ reports of online harassment by providing positive counterspeech; includes monitoring the targets’ social media accounts for continued attacks and to send continued counter-messaging (among other supportive services)

Funders and implementers should consider providing support to scale up interventions like those referenced above for building communities of supporters and crafting and deploying effective counterspeech campaigns, including supporting the integration of these civil society interventions into technology platforms.

Strengthen protective factors and build resilience of individuals and communities

Because gendered disinformation is born of gender inequality and discriminatory norms, deterring its creation, dissemination, and amplification in the digital information environment will require donors and implementers to think beyond the perceived scope of counter-disinformation programming.  As noted previously, programming to strengthen the protective factors and build the resilience of individuals, communities, and societies against gendered disinformation may not look like programming that donors and implementers typically think of as counter-disinformation interventions.  This programming should not be limited to interventions to build the resilience of individual women, girls, and people with diverse sexual orientations and gender identities (although this is one important type of response), but should also include gender-transformative interventions which aim to strengthen the resilience and protection of whole communities and societies against both perpetration and consumption of gendered disinformation.

Programming to strengthen individuals’, communities’, and societies’ protective factors against the threat of gendered disinformation (and disinformation more broadly) includes interventions spanning development sectors, such as programming to: 

  • promote gender equity and gender justice
  • transform discriminatory and patriarchal gender norms
  • strengthen social cohesion
  • increase democratic participation and inclusion
  • improve equitable access to quality education
  • increase economic stability and improve economic opportunities
  • build media and information literacy 
  • strengthen critical thinking, analytical, and research skills 
  • provide social support and confidence-building opportunities 

Some who work at the intersection of technology, disinformation, and gender will caution that a focus on interventions such as media and information literacy, critical thinking skills, and confidence-building inappropriately places the responsibility of withstanding disinformation and its effects on the individuals adversely affected by it, rather than on the technology sector and policymakers to identify and institute effective solutions.  The onus of responding to and preventing gendered disinformation should not fall on the shoulders of subjects of gendered digital attacks, nor on those targeted or manipulated as consumers of false or problematic content. However, in order to stamp out the problem of disinformation, gender-sensitive counter-disinformation efforts must include thinking holistically about building resilience to disinformation and designing programming to strengthen the resilience not only of individuals, but also of communities and whole societies.  Regionally or nationally implemented media and information literacy curricula, for example, do not place the responsibility on individual students to learn to withstand gendered disinformation, but rather work toward inoculating entire communities against information integrity challenges.

Donors and implementers should work to integrate gender-sensitive counter-disinformation programming across development sectors, building these interventions into programming focused on longer-term social and behavior change to build the resilience of individuals, communities, and societies to withstand the evolving problem of disinformation.