The Vaccine Safety Project
Strengthening information on vaccine safety on Wikipedia
The Vaccine Safety Project aims to document existing knowledge and to identify and reduce knowledge gaps related to vaccine safety on Wikipedia. The project involves research into vaccine-safety resources within and outside Wikipedia, as well as collaboration with experts from Wikipedia and the Vaccine Safety Net. Read more here: https://en.wikipedia.org/wiki/Wikipedia:Vaccine_safety
(Copied from website)
The Trust Project
The Trust Project, a consortium of top news companies led by award-winning journalist Sally Lehrman, is developing transparency standards that help you easily assess the quality and credibility of journalism.
Over two years, Trust Project researchers interviewed people in the U.S. and Europe to find out what’s important to them when it comes to news.
The Social Media Tracking Centre
The first SMTC was deployed during Ghana's 2012 elections. The idea is to identify electoral malpractice in real time and use that information to warn the relevant institutions. In Ghana, Penplusbytes passed relevant information on to the National Elections Security Task Force (NESTF), and this body acted on it. SMTC teams monitor platforms such as Twitter and Facebook, using the Aggie social media tracking software to monitor, verify, and inform key stakeholders of misinformation.
(Copied from website)
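To illustrate the monitor, verify, and inform workflow described above, the sketch below shows a minimal keyword-based pipeline that flags election-related posts for human verification and forwards confirmed reports to stakeholders. It is a hypothetical simplification for illustration only, not the actual Aggie implementation; all names (Post, ALERT_KEYWORDS, inform_stakeholders, and so on) are assumptions.

```python
# Hypothetical sketch of a monitor -> verify -> inform pipeline, loosely
# modelled on the workflow described above. This is NOT the real Aggie
# code; every name and threshold here is illustrative.
from dataclasses import dataclass, field
from typing import List

ALERT_KEYWORDS = {"ballot stuffing", "polling station closed", "vote buying"}

@dataclass
class Post:
    platform: str          # e.g. "twitter" or "facebook"
    author: str
    text: str

@dataclass
class Report:
    post: Post
    matched: List[str] = field(default_factory=list)
    verified: bool = False

def monitor(posts: List[Post]) -> List[Report]:
    """Flag posts whose text mentions any election-incident keyword."""
    reports = []
    for post in posts:
        hits = [kw for kw in ALERT_KEYWORDS if kw in post.text.lower()]
        if hits:
            reports.append(Report(post=post, matched=hits))
    return reports

def verify(report: Report, confirmed_by_team: bool) -> Report:
    """A human monitor confirms or rejects the flagged report."""
    report.verified = confirmed_by_team
    return report

def inform_stakeholders(reports: List[Report]) -> None:
    """Forward verified reports to the relevant institution (e.g. NESTF)."""
    for r in reports:
        if r.verified:
            print(f"[ALERT] {r.post.platform}: {r.matched} -> {r.post.text!r}")

if __name__ == "__main__":
    sample = [Post("twitter", "@observer1", "Polling station closed early in Accra")]
    flagged = monitor(sample)
    inform_stakeholders([verify(r, confirmed_by_team=True) for r in flagged])
```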
The Social Media Analysis Toolkit (SMAT)
The conversations that trend on internet platforms shape our world in consequential ways, from who we vote for, to what news we read, to how we respond to a pandemic.
But frequently, these conversations don’t trend organically — they’re the result of influence campaigns intended to misinform, radicalize, or polarize. Rather than public opinion influencing the trends, social media trends influence public opinion.
The Santa Clara Principles On Transparency and Accountability in Content Moderation
The Santa Clara Principles on Transparency and Accountability in Content Moderation were developed by legal scholars and technologists based mostly in the United States. They cover various aspects of content moderation and target social media companies with large user bases.
The Fake News Detector
The Fake News Detector allows you to detect and flag fake news, clickbait, and extremely biased news. After you flag a news story, other people who use the Fake News Detector will be able to see your flag, pay closer attention to the story, and may also flag it.
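The crowdsourced flagging mechanism described above can be pictured as a simple flag-aggregation store: each user's flag is recorded per story, and later readers see the accumulated counts. The sketch below is a hypothetical illustration of that general idea, not the extension's actual implementation; the class and category names are assumptions.

```python
# Hypothetical sketch of crowdsourced story flagging: users flag a story
# with a category, and later readers see the aggregated counts.
# Not the actual Fake News Detector implementation.
from collections import Counter, defaultdict

CATEGORIES = {"fake_news", "clickbait", "extremely_biased", "legitimate"}

class FlagStore:
    def __init__(self):
        # story URL -> {user_id -> category}, one flag per user per story
        self._flags = defaultdict(dict)

    def flag(self, story_url: str, user_id: str, category: str) -> None:
        if category not in CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        self._flags[story_url][user_id] = category

    def summary(self, story_url: str) -> Counter:
        """What other users see: how many flags of each kind a story has."""
        return Counter(self._flags[story_url].values())

store = FlagStore()
store.flag("https://example.com/story", "alice", "clickbait")
store.flag("https://example.com/story", "bob", "fake_news")
print(store.summary("https://example.com/story"))
# Counter({'clickbait': 1, 'fake_news': 1})
```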
The Factual
The news experience is frustrating.
News outlets that were respected in the past are now increasingly partisan and biased. That has led to thousands of new sources, but it's hard to know which ones to trust.
And when you finally find an article you want to read, you're often forced to buy a full subscription just to read that one article.
We love reading the news, and the frustrating experience above is one we encounter daily.
The Computational Propaganda Project
Since 2012, the project has been investigating the use of algorithms, automation, and computational propaganda in public life. Political bots are manipulating public opinion over major social networking applications. The project enables a new team of social and information scientists to investigate the impact of automated scripts, commonly called bots, on social media. They study both the bot scripts and the people who make them, and then work with computer scientists to improve the ways such bots are caught and stopped.
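As a rough illustration of the kind of automated-account detection this work informs, the sketch below scores accounts with two simple heuristics: posting rate and repetition of identical text. The thresholds, weights, and feature choices are assumptions for illustration only; they are not the project's methodology, and real detection systems use far richer signals.

```python
# Hypothetical bot-scoring heuristics: a high posting rate and heavy
# repetition of identical text raise the score. Purely illustrative.
from collections import Counter
from dataclasses import dataclass
from typing import List

@dataclass
class Account:
    handle: str
    posts_last_24h: List[str]   # message texts posted in the last day

def bot_score(account: Account) -> float:
    """Return a score in [0, 1]; higher means more bot-like (heuristic)."""
    n = len(account.posts_last_24h)
    if n == 0:
        return 0.0
    # Heuristic 1: more than ~50 posts per day is suspicious (assumed threshold).
    rate_signal = min(n / 50.0, 1.0)
    # Heuristic 2: fraction of posts that are exact duplicates of another post.
    counts = Counter(account.posts_last_24h)
    duplicate_signal = sum(c for c in counts.values() if c > 1) / n
    return 0.5 * rate_signal + 0.5 * duplicate_signal

suspect = Account("auto_amplifier", ["Vote NOW!"] * 80)
print(f"{suspect.handle}: {bot_score(suspect):.2f}")  # close to 1.0
```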
The (mis)informed citizen
Though concerns about the use of misinformation are certainly not new, the reach, speed and volume of misinformation in the digital era have generated a heightened sense of urgency among policymakers, scholars and the public alike. At the same time, relatively little is known about how such information is processed by people online – where much of it is encountered incidentally by citizens who are otherwise inattentive to public affairs – nor about its lasting effects on those who encounter it.
teyit.org
teyit.org works to ensure that internet users can access accurate information by verifying claims in many areas, from widely known misconceptions and suspicious information circulating on social media to claims raised in the media and urban legends.
teyit.org thus enables citizens and non-governmental organizations who use the internet as their primary news source to learn which information on online platforms is correct and which is not. teyit.org also aims to foster the habit of critical thinking and to increase new media literacy.
(Copied from website)