Written by Bret Barrowman, Senior Specialist for Research and Evaluation, Evidence and Learning Practice at the International Republican Institute
Effective democracy, human rights, and governance programming requires practitioners to accurately assess underlying causes of information disorders and to evaluate the effectiveness of interventions to treat them. Research serves these goals at several points in the DRG program cycle: problem and context analysis, targeting, design and content development, monitoring, adaptation, and evaluation.
Goals of Research
Applying research in the DRG program cycle supports programs by fulfilling the scientific goals of description, explanation, and prediction. Description identifies characteristics of research subjects and general patterns or relationships. Explanation identifies cause and effect relationships. Prediction forecasts what might happen in the future.
Research for Context Analysis and Design
Effective DRG programs to counter disinformation require the identification of a specific problem or set of problems in the information environment in a particular context. Key methods include landscape analysis, stakeholder analysis, political economy analysis, and the use of surveys or interviews to identify potential beneficiaries or particularly salient themes within a specific context.
Sample general research questions:
- What are the main drivers of disinformation in this context?
- What are the incentives for key actors to perpetuate or mitigate disinformation in this context?
- Through which medium is disinformation likely to have the greatest impact in this context?
- What evidence suggests our proposed activity(ies) will mitigate the problem?
- Which groups are the primary targets or consumers of disinformation in this context?
- Which key issues or social cleavages are most likely to be subjects of disinformation in this context?
Research for Monitoring and Adaptation
There are several research and measurement approaches available to practitioners for monitoring activities related to information and disinformation, both for program accountability and for adaptation to changing conditions. Key methods include digital and analog media audience metrics, measurement of knowledge, attitudes, or beliefs through surveys or focus groups, media engagement metrics, network analysis, and A/B tests. Key research questions include:
- How many people are engaging in program activities or interventions?
- What demographic, behavioral, or geographic groups are engaging in program activities? Is the intervention reaching its intended beneficiaries?
- How are participants, beneficiaries, or audiences reacting to program activities or materials? How do these reactions differ across subgroups, and specifically marginalized groups?
- Is one mode or message more effective than another in prompting audiences to engage with information and/or share it with others? How do information uptake and sharing differ across subgroups? What are the barriers to information or program uptake among marginalized groups?
- What framing of content is most likely to reduce consumption of disinformation, or increase consumption of reliable information? For example, is a fact-checking message more likely to cause consumers to update their beliefs in the direction of truth, or does it cause retrenchment in belief in the original disinformation? Does this effect vary across subgroups?
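A/B tests of this kind are often analyzed by comparing engagement rates between message variants. The sketch below is a minimal illustration, not a prescribed method: it runs a two-proportion z-test on hypothetical engagement counts for two framings, and all numbers are made up for the example.

```python
# Hypothetical A/B test: compare engagement rates for two message framings.
# All counts below are illustrative, not real program data.
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two engagement proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Variant A: fact-check framing; Variant B: narrative framing (made-up numbers)
z, p = two_proportion_ztest(success_a=120, n_a=1000, success_b=90, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

In practice a pilot of this size would also be broken out by subgroup, since an overall difference can mask very different reactions among marginalized audiences.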
Research for Program Evaluation
DRG program and impact evaluation can identify and describe key results, assess or improve the quality of program implementation, identify lessons that might improve the implementation of similar programs, or attribute changes in key outcomes to a program intervention. Key methods include randomized evaluations and quasi- or non-experimental evaluations, including pre/post designs, difference-in-differences, statistical matching, comparative case studies, process tracing, and regression analysis. Key research questions include:
- Are there observable outcomes associated with the program?
- Does a program or activity cause a result of interest? For example, did a media literacy program increase the capacity of participants to distinguish between true news and false news? Does a program cause unintended outcomes?
- What is the size of the effect (i.e., impact) of an activity on an outcome of interest?
- What is the direction of the effect of an activity on an outcome of interest? For example, did a fact checking program decrease confidence in false news reports, or did it cause increased acceptance of those reports through backlash?
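Among the quasi-experimental designs named above, difference-in-differences is perhaps the simplest to illustrate: the program's effect is estimated as the change in the treated group's outcome net of the change in a comparison group. The sketch below uses invented survey means for a hypothetical media literacy program; it shows only the core arithmetic, not the parallel-trends checks a real evaluation would require.

```python
# Hypothetical difference-in-differences estimate for a media literacy program.
# Scores are invented means on a "false news identification" index (0-100).

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: treated group's change net of the control group's trend."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Made-up survey means before and after the program
effect = diff_in_diff(treat_pre=52.0, treat_post=61.0, ctrl_pre=50.0, ctrl_post=53.0)
print(f"Estimated program effect: {effect:.1f} points")
# prints: Estimated program effect: 6.0 points
```

The control group's three-point gain here stands in for whatever would have happened without the program, which is why the design answers the direction-of-effect question above: a negative estimate would indicate backlash rather than improvement.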
Recommendations
- Specific research questions should drive the selection of research designs and data collection methods. Committing to a specific design or data collection method first will limit the questions the researcher is able to answer.
- Use a pilot-test-scale model for program activities or content. Using one or more of these research approaches, test interventions with small groups of respondents, and use the pilot data to refine promising approaches before deploying them to a larger set of beneficiaries.
- Protect personally identifiable information (PII). All of the data collection methods described in this section can collect information on individual characteristics, attitudes, beliefs, and willingness to engage in political action. Regardless of the method, researchers should make every effort to secure informed consent from research participants and should take care to secure and de-identify personal data.
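One common de-identification step is replacing direct identifiers with salted pseudonyms before data are stored or shared. The sketch below is a minimal, assumed workflow (the field names and values are hypothetical), and salted hashing alone is not full anonymization: quasi-identifiers such as district or demographics can still re-identify respondents and need separate treatment.

```python
# Minimal de-identification sketch (assumed workflow, not a complete privacy
# solution): replace direct identifiers with salted hashes before storage.
import hashlib
import secrets

SALT = secrets.token_bytes(16)  # store separately from the data; rotate per study

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (name, phone, email) with a salted hash."""
    return hashlib.sha256(SALT + identifier.encode("utf-8")).hexdigest()

# Hypothetical survey record with a direct identifier
record = {"respondent": "Jane Doe", "district": "North", "belief_score": 4}
record["respondent"] = pseudonymize(record["respondent"])
# The stored record no longer contains the name; within this study, the same
# salt maps the same respondent to the same pseudonym, so waves can be linked.
```

Keeping the salt out of the dataset matters because an unsalted hash of a known name or phone number can be trivially reversed by hashing candidate values.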
- Consider partnerships with research organizations, university labs, or individual academic researchers, who may have a comparative advantage in designing and implementing complex research designs, and who may have an interest in studying the effects of counter-disinformation programs.