A project of the g0v civic technology community in Taiwan, CoFacts is a fact-checking bot for messaging groups. Messages can be forwarded to the CoFacts bot for fact-checking by a team of volunteers; the CoFacts bot can also be added to private groups, and will automatically share corrections if a fact-checked piece of false content is shared within the group.
(Adapted from website)
TrustServista uses advanced Artificial Intelligence algorithms to provide media professionals, analysts, and content distributors with in-depth content analytics and verification capabilities.
TrustServista determines the trustworthiness of news articles using Artificial Intelligence. The trustworthiness algorithm combines deep content analysis, the publisher's profile, the sources an article mentions or directly links to, and other publishers' viewpoints on the same story.
MarvelousAI is an early-stage startup founded by tech industry veterans. We are building an augmented analytics platform to provide actionable insights regarding online narratives. Our “Cyborg” intelligence methodology detects narratives and fine-grained emotional content, combining the power of human-in-the-loop learning with the latest in natural language processing, computational linguistics, and machine learning.
We are a joint team of engineers and investigators from CERTH-ITI and Deutsche Welle, trying to build a comprehensive tool for media verification on the Web. The Media Verification Assistant features a multitude of image tampering detection algorithms, plus metadata analysis, GPS geolocation, EXIF thumbnail extraction, and integration with reverse image search via Google.
The Hamilton 2.0 dashboard, a project of the Alliance for Securing Democracy at the German Marshall Fund of the United States, provides a summary analysis of the narratives and topics promoted by Russian, Chinese, and Iranian government officials and state-funded media on Twitter, YouTube, state-sponsored news websites, and via official diplomatic statements at the United Nations. (Note: there are currently no UN statements or YouTube data for Iran.)
This project visualizes the Atlantic Council’s DFRLab research on coordinated disinformation campaigns. Coordinated disinformation campaigns are more likely to thrive when they go unnoticed and unchecked. This interactive visualizer breaks down the methods, targets, and origins of select coordinated disinformation campaigns throughout the world. Significant efforts across the industry are working to counter the effects of disinformation, and these countermeasures take a wide range of forms.
The Digital Democracy Room is an initiative of FGV DAPP to monitor public debate on the internet and fight disinformation strategies that threaten the integrity of political and electoral processes, seeking to strengthen democratic institutions.
It monitors the political debate on social networks in Brazil and, now, in three more countries in Latin America.
BotSlayer is an application that helps track and detect potential manipulation of information spreading on Twitter. The tool is developed by the Observatory on Social Media at Indiana University, the same lab that brought you Botometer and Hoaxy.
Botometer (formerly BotOrNot) checks the activity of a Twitter account and gives it a score based on how likely the account is to be a bot. Higher scores are more bot-like. Increasing evidence suggests that a growing amount of social media content is generated by autonomous entities known as social bots. Many social bots perform useful functions, but there is a growing record of malicious applications of social bots.
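Botometer's actual classifier is a supervised ensemble trained on a large set of account, network, and content features; the sketch below is only an illustration of the general idea of mapping a few account-level features to a bot-likelihood score in [0, 1]. The feature names, weights, and logistic form here are hypothetical, not Botometer's real model.

```python
import math

def bot_likelihood(tweets_per_day: float,
                   followers: int,
                   following: int,
                   account_age_days: int) -> float:
    """Illustrative sketch only: combine a few account features into a
    score in [0, 1], where higher means more bot-like. The features and
    weights are hypothetical, not Botometer's actual ensemble model."""
    # Heuristic intuition: accounts that post at very high volume,
    # follow far more users than follow them back, or are very new
    # tend to look more automated.
    follow_ratio = following / max(followers, 1)
    z = (0.08 * tweets_per_day             # posting volume
         + 0.6 * math.log1p(follow_ratio)  # follow-back imbalance
         - 0.002 * account_age_days        # older accounts score lower
         - 1.0)                            # bias term
    return 1.0 / (1.0 + math.exp(-z))      # logistic squash to [0, 1]

# A young, high-volume account with a lopsided follow ratio scores
# much closer to 1 than an established, low-volume one.
high = bot_likelihood(tweets_per_day=200, followers=10,
                      following=5000, account_age_days=30)
low = bot_likelihood(tweets_per_day=2, followers=1000,
                     following=500, account_age_days=3000)
```

In practice, tools like Botometer compute many such signals and let a trained model, rather than hand-picked weights, decide how to combine them.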
This project uses advanced machine learning techniques to detect propaganda accounts on Twitter.
(Copied from website)