3. Research for Counter-Disinformation Program Implementation

Updated on Apr 06, 2021

There are several research and measurement tools available to assist practitioners in monitoring activities related to information and disinformation. At a basic level, these tools support program staff and monitoring, evaluation, and learning (MEL) staff in performing an accountability function. However, these research tools also play an important role in adapting programming to changing conditions. Beyond answering questions about whether and to what extent program activities are engaging their intended beneficiaries, these research tools can help practitioners identify how well activities or interventions are performing so that implementers can iterate, as in an adaptive management or Collaborating, Learning, and Adapting (CLA) framework.

Program Monitoring 

Program monitoring assesses implementation: whether content is reaching its intended targets and whether those targets are engaging with it.

Key Research Questions:

  • How many people are engaging in program activities or interventions?
  • What demographic, behavioral, or geographic groups are engaging in program activities? Is the intervention reaching its intended beneficiaries?
  • How are participants, beneficiaries, or audiences reacting to program activities or materials?
  • How does engagement or reaction vary across activity types?

Several tools are available to assist DRG practitioners in monitoring the reach of program activities and the degree to which audiences and intended beneficiaries are engaging with program content. These tools differ according to the media through which information and disinformation, as well as counter-programming, are distributed. For analog media outlets like television and radio, audience metrics, including size, demographic composition, and geographic reach, may be available through the outlets themselves or through state administrative records. The usefulness and detail of this information depend on the capacity of the outlets to collect it and their willingness to share it publicly. Local marketing or advertising firms may also be good sources of audience information. In some cases, the reach of television and/or radio may be modeled using information on the broadcast infrastructure.
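As a simple illustration of infrastructure-based reach modeling, the sketch below estimates how many people live within a transmitter's nominal broadcast radius. The transmitter location, coverage radius, and settlement data are hypothetical placeholders; a real analysis would also need to account for terrain, signal strength, and receiver ownership.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius ~6,371 km

# Hypothetical radio transmitter: location and nominal coverage radius (km).
transmitter = {"lat": -1.2921, "lon": 36.8219, "radius_km": 60}

# Hypothetical settlements with population estimates.
settlements = [
    {"name": "Town A", "lat": -1.10, "lon": 36.90, "pop": 25_000},
    {"name": "Town B", "lat": -2.05, "lon": 37.30, "pop": 12_000},
]

covered_pop = 0
for s in settlements:
    d = haversine_km(transmitter["lat"], transmitter["lon"], s["lat"], s["lon"])
    in_range = d <= transmitter["radius_km"]
    if in_range:
        covered_pop += s["pop"]
    print(f"{s['name']}: {d:.0f} km from transmitter, covered={in_range}")

print(f"Estimated population within broadcast range: {covered_pop:,}")
```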

Digital platforms provide a more accessible suite of metrics. Social media platforms like Twitter, Facebook, and YouTube have built-in analytical tools that allow even casual users to monitor post views and engagements (including “likes,” shares, and comments). Depending on a platform’s Application Programming Interface (API) and terms of service, more sophisticated analytical tools may be available. For example, Twitter’s API allows users to collect large volumes of both metadata and tweet content, enabling users to monitor relationships between accounts and conduct content or sentiment analysis around specific topics. Google Analytics provides a suite of tools for measuring consumer engagement with advertising material, including behavior on destination websites. For example, these tools can help practitioners understand how long audiences, having reached a resource or website by clicking on digital content (e.g., links embedded in tweets, Facebook posts, or YouTube videos), spend on the destination and what resources they view, download, or otherwise engage with. Tracking click-throughs provides potential measures of behavior at the destination, not just beliefs or attitudes.
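As a concrete illustration of API-based monitoring, the sketch below pulls recent tweets on a topic together with their public engagement metrics via Twitter’s v2 recent-search endpoint. The query string and the TWITTER_BEARER_TOKEN environment variable are placeholders you would supply, and access depends on the platform’s current API terms and access tier.

```python
import os
import requests

# Assumes a Twitter API v2 bearer token is available in the environment.
BEARER_TOKEN = os.environ["TWITTER_BEARER_TOKEN"]

# v2 recent-search endpoint: returns tweets from roughly the last 7 days.
URL = "https://api.twitter.com/2/tweets/search/recent"

params = {
    "query": '"fact check" lang:en -is:retweet',  # placeholder topic query
    "tweet.fields": "public_metrics,created_at",
    "max_results": 100,
}
headers = {"Authorization": f"Bearer {BEARER_TOKEN}"}

resp = requests.get(URL, headers=headers, params=params)
resp.raise_for_status()
tweets = resp.json().get("data", [])

# Aggregate engagement counts (likes, retweets, replies) across the sample.
totals = {"like_count": 0, "retweet_count": 0, "reply_count": 0}
for tweet in tweets:
    metrics = tweet["public_metrics"]
    for key in totals:
        totals[key] += metrics.get(key, 0)

print(f"Tweets sampled: {len(tweets)}")
print(f"Engagement totals: {totals}")
```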

 

Workshopping Content: Pilot-Test-Scale 

Determining the content of programmatic activities is a key decision point in any program cycle. With respect to counter-disinformation programs, implementers should consider how the messenger, mode, and content of an intervention are likely to influence uptake and engagement by target groups, and whether the material is likely to change beliefs or behavior. With this in mind, workshopping and testing counter-disinformation content throughout the program implementation phase can help implementers identify which programmatic approaches are working, as well as how and whether to adapt content in response to changing conditions.

Key Research Questions:

  • What modes or messengers are most likely to increase content uptake in this context? For example, is one approach more effective than another in causing recipients to engage with information and/or share it with others?
  • What framing of content is most likely to reduce consumption of disinformation, or increase consumption of true information in this context? For example, is a fact-checking message more likely to cause consumers to update their beliefs in the direction of truth, or does it cause retrenchment in belief in the original disinformation?

Several data collection methods allow DRG practitioners to workshop the content of interventions with small numbers of potential beneficiaries before scaling activities to larger audiences. Focus groups (scientifically sampled, structured, small group discussions) are used regularly in both market research and DRG programs to elicit in-depth reactions to test products. This format allows researchers to observe spontaneous reactions to prompts and to probe respondents for more information, as opposed to surveys, which may be more broadly representative but rely on respondents selecting uniform, predetermined response items that capture less nuance. Focus groups are useful for collecting initial impressions about a range of alternatives for potential program content before scaling activities to a broader audience.

A/B tests are a more rigorous method for determining which variations in content or activities are most likely to achieve desired results, especially when the alternatives are similar and differences between them are likely to be small. A/B tests are a form of randomized evaluation in which a researcher randomly assigns members of a pool of research participants to receive different versions of content. For example, a product marketing or campaign fundraising team might randomly assign a pool of email addresses to receive the same content under one of several different subject lines. Researchers then measure differences between these experimental groups on the same outcomes, which for digital content often include engagement rates, click-throughs, likes, shares, and/or comments.
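A minimal sketch of the mechanics, using only the Python standard library and synthetic click data: recipients are randomly split between two subject-line variants, and the difference in click-through rates is assessed with a two-proportion z-test. In a real test, the outcome data would come from the email or ad platform’s analytics rather than simulation.

```python
import random
from statistics import NormalDist

random.seed(42)

# Hypothetical pool of recipient IDs, randomly split into variants A and B.
pool = list(range(2000))
random.shuffle(pool)
group_a, group_b = pool[:1000], pool[1000:]

# Synthetic outcomes: simulate click-throughs at assumed underlying rates.
clicks_a = sum(random.random() < 0.10 for _ in group_a)  # ~10% CTR
clicks_b = sum(random.random() < 0.13 for _ in group_b)  # ~13% CTR

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1, p2, z, p_value

p1, p2, z, p = two_proportion_ztest(clicks_a, len(group_a),
                                    clicks_b, len(group_b))
print(f"Variant A CTR: {p1:.3f}  Variant B CTR: {p2:.3f}")
print(f"z = {z:.2f}, p-value = {p:.4f}")
```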

 

Mode: The mechanisms through which programmatic content is delivered (e.g. in person, written materials, television, radio, social media, email, SMS, etc.)

Because participants are randomly assigned to receive the different variations, the researcher can confidently attribute any differences in these outcomes to the content variation.

Social media platforms have used A/B testing to optimize platform responses to misinformation. In other cases, researchers or technology companies themselves have experimented with variations of political content labels to determine whether these tags affect audience engagement. Similarly, DRG programs might use A/B testing to optimize digital counter-disinformation content, exploring, for instance, how different framings or endorsers of fact-checking messages affect audience beliefs.
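Because differences between similar content variants are often small, it is worth checking in advance how large each experimental group needs to be. The sketch below applies the standard two-proportion sample-size formula; the baseline and target click-through rates, significance level, and power are illustrative assumptions.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-proportion z-test."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_power = NormalDist().inv_cdf(power)          # desired power
    p_bar = (p1 + p2) / 2
    num = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
           + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting a lift from a 10% to a 12% click-through rate:
print(n_per_group(0.10, 0.12))  # roughly 3,841 recipients per variant
```

The required sample grows rapidly as the expected difference shrinks, which is one reason A/B tests of subtle content variations typically run on large digital audiences rather than small pilot groups.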

 

Tools Spotlight: Content and Message Testing Tools

Facebook: “A/B testing lets you change variables, such as your ad creative, audience, or placement to determine which strategy performs best and improve future campaigns. For example, you might hypothesize that a custom audience strategy will outperform an interest-based audience strategy for your business. An A/B test lets you quickly compare both strategies to see which one performs best.”

RIWI: “Respondents are randomly assigned to a treatment or control group to determine the impact of different concepts, videos, ads or phrases. All groups will see identical initial questions, followed by treatment group(s) receiving a developed message. After the treatment, all respondents will be asked questions to determine the resonance and engagement of the message or to measure behavioral changes (assessed post-treatment) between groups.”

GeoPoll: “GeoPoll works with leading global brands to test new concepts through video and picture surveys and mobile-based focus groups. Using GeoPoll’s research capabilities and large panel of respondents, brands can reach their target audience and gather much-needed data on what messaging is most effective, how new products should be marketed, how consumers will react to new products, and more.”

Mailchimp: “A/B testing campaigns test different versions of a single email to see how small changes can have an impact on your results. Choose what you want to test, like the subject line or content, and compare results to find out what works and what doesn't work for your audience.”

 
