3. Policy Recommendations

  • When implementing these programmatic approaches, consider political incentives in addition to technical solutions. 
  • Consider an inclusive, gender-sensitive landscape analysis or a political economy analysis to identify how the structure of social cleavages creates incentives and opportunities for candidates or political parties to exploit context-specific norms and stereotypes around gender identity, ethnic or religious identity, sexual orientation, and groups that have been historically marginalized in that context.
  • Programmatic interventions should account for diverging interests within parties. Parties are composed of functionaries, elected officials, interest groups, formal members, supporters, and voters, each of which may have unique incentives to propagate or take advantage of disinformation.
  • The collective action problem of disinformation makes one-off interactions with single partners difficult – consider implementing technical programs with regular, ongoing interaction among all relevant parties to increase confidence that competitors are not “cheating.”
  • Relatedly, use the convening power of donors or implementing organizations to bring relevant actors to the table.
  • Consider pacts or pledges, especially in pre-election periods, in which all major parties commit to mitigating disinformation. Importantly, the agreement itself is cheap talk, so pay careful attention to the design of institutions, both within the pact and externally, to monitor compliance (the repeated-game sketch following this list illustrates why).
  • There is limited evidence for the effectiveness of common counter-disinformation program approaches focused on political parties and political competition, including media literacy, fact-checking, and content labeling. Limited evidence does not necessarily imply these programs do not work, only that DRG funders and implementing partners should invest in rigorous evaluation to determine their impact on key outcomes – political knowledge, attitudes and beliefs, polarization, propensity to engage in hate speech or harassment, and political behavior such as voting – and to identify what design elements distinguish effective programs from ineffective ones (a minimal evaluation sketch appears after this list).
  • DRG program responses have tended to lag behind political parties’ use of sophisticated technologies like data harvesting, microtargeting, deepfakes, and AI-generated content. Funders and implementing partners should consider using innovation funds to generate concepts for responses that mitigate the potentially harmful effects of these tools, and to rigorously evaluate their impact.
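
To make the collective action logic above concrete, here is a minimal sketch that stylizes disinformation competition between two parties as a repeated prisoner's dilemma. The payoff values, and the framing of the game itself, are illustrative assumptions rather than figures from this guidance: in a one-shot contest, deploying disinformation is each party's dominant strategy (which is why a pledge by itself is cheap talk), but with repeated interaction and monitoring, mutual restraint becomes sustainable once parties weigh future rounds heavily enough.

```python
# Illustrative two-party "disinformation dilemma" (hypothetical payoffs).
# Moves: "R" = refrain from disinformation, "D" = deploy disinformation.
# PAYOFF[(my_move, their_move)] gives my payoff for one election cycle.
PAYOFF = {
    ("R", "R"): 3,  # both refrain: healthy competition
    ("R", "D"): 0,  # I refrain while my rival defects: I lose ground
    ("D", "R"): 5,  # I defect on a refraining rival: short-term gain
    ("D", "D"): 1,  # both defect: race to the bottom
}

def one_shot_best_response(their_move: str) -> str:
    """In a single, unrepeated contest, 'D' strictly dominates 'R'."""
    return max(("R", "D"), key=lambda my_move: PAYOFF[(my_move, their_move)])

def restraint_sustainable(delta: float) -> bool:
    """Grim-trigger check for the repeated game with discount factor delta.

    Refraining forever is worth 3 / (1 - delta); defecting once and then
    facing mutual defection forever is worth 5 + delta * 1 / (1 - delta).
    Restraint holds when the first stream is at least as large as the second.
    """
    refrain_forever = PAYOFF[("R", "R")] / (1 - delta)
    defect_once = PAYOFF[("D", "R")] + delta * PAYOFF[("D", "D")] / (1 - delta)
    return refrain_forever >= defect_once

if __name__ == "__main__":
    print(one_shot_best_response("R"))  # "D": one-off pledges are cheap talk
    print(one_shot_best_response("D"))  # "D"
    # With these payoffs restraint holds once delta >= (5-3)/(5-1) = 0.5.
    for delta in (0.3, 0.5, 0.7):
        print(delta, restraint_sustainable(delta))
```

Under these assumptions, the design implication is to lengthen the shadow of the future: regular convenings and credible compliance monitoring effectively raise the weight (delta) each party places on future rounds, which is what turns a pact from cheap talk into a sustainable equilibrium.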
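
Similarly, the recommendation to invest in rigorous evaluation can be sketched as a minimal randomized comparison. The data below are simulated, and the outcome (a 0-100 political knowledge score) and the assumed program effect are placeholders; a real evaluation would add pre-registration, power calculations, and covariate adjustment.

```python
# Minimal sketch of analyzing a randomized evaluation (simulated data).
# Assumes a counter-disinformation program was randomly assigned and a
# single survey outcome was measured afterward.
import random
import statistics

random.seed(0)

def simulate_respondent(treated: bool) -> float:
    """Placeholder outcome: a noisy 0-100 political-knowledge score."""
    true_effect = 4.0 if treated else 0.0  # assumed program effect
    return random.gauss(50.0 + true_effect, 10.0)

treatment = [simulate_respondent(True) for _ in range(500)]
control = [simulate_respondent(False) for _ in range(500)]

# Difference in means and its standard error for two independent samples.
effect = statistics.mean(treatment) - statistics.mean(control)
se = (statistics.variance(treatment) / len(treatment)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"estimated effect: {effect:.2f}")
print(f"approximate 95% CI: [{effect - 1.96 * se:.2f}, {effect + 1.96 * se:.2f}]")
```

The same template extends to any of the outcomes listed above; what makes an evaluation rigorous is random assignment and a pre-specified outcome, not the particular statistic computed.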