
Publishers: How AI can be used to fight disinformation

Generative AI might fuel the plague of disinformation. But there are ways in which AI can also be the solution.

Disinformation isn’t a new problem, but the internet has greatly amplified its impact. So far, social media has been deemed the primary culprit, as more and more people turn to social platforms for news. A Eurobarometer report shows that 28% of EU citizens feel they have been exposed to fake news, and the issue is worse in countries with limited press freedom.

The rapid advancement of generative AI tools like ChatGPT for text and Midjourney for images threatens to make the problem much worse. Laurence Dierickx, a researcher on AI-driven journalism and fact-checking at the University of Bergen, says that “with the rise of generative AI and large language models, spreading misinformation at a large scale has never been so accessible, easy, and quick.”

Some cases are relatively innocuous, or at least easily disproved, such as an AI-generated photo of the Pope in a puffer jacket or deepfakes showing Donald Trump’s arrest. But AI tools are also getting into the hands of malicious actors.

Anja Bechmann, a professor at Aarhus University who specialises in the role of AI in algorithmic communicative spaces, told The Fix that “AI has been used to automatically produce and spread misinformation campaigns using various kinds of bot techniques”. This activity has been labelled “rapid disinformation attacks”: “attacks in which disinformation is unleashed quickly and broadly with the goal of creating an immediate disruptive effect”.

Artificial intelligence, especially generative models that produce human-like text, can produce the content needed for such attacks cheaply, quickly and at scale. Bechmann adds that “AI has been used to create and amplify misinformation throughout the world, including Europe. Good examples are how actors create misinformation using deepfake AI tools, e.g. manufactured pictures through DALL-E or Midjourney.”

The task of manually fact-checking the sea of AI-generated nonsense is daunting. Fact-checkers are often overworked, face online harassment and lack the tools to keep up with the spreaders of disinformation.

Is AI also the solution?

Just as AI is used to create disinformation, it can also be used to detect and counter it. One approach is an end-to-end content detection model, in which statements are checked against a pre-built database to determine whether they are true or false. Based on this method, Meta, the parent company of Facebook, launched its Sphere tool last year.

Sphere is an AI tool that checks whether the claims and citations made in statements match the original work. For this, it draws on a database built from Wikipedia’s collective knowledge base. The weakness of this approach is that if the database contains wrong information, the tool will produce wrong results. Dierickx adds: “Another problem [with Sphere] is using Wikipedia as a referral, which is not considered reliable in journalism.”
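To make the database-comparison idea concrete, here is a minimal sketch of how a claim could be scored against a handful of vetted reference statements using sentence embeddings. This illustrates the general technique only, not Meta’s Sphere pipeline; the sentence-transformers model name and the similarity threshold are assumptions.

```python
# Illustrative database-comparison check: embed a claim and compare it to
# reference statements from a trusted knowledge base. Not Meta's Sphere;
# the model name and threshold below are assumptions for illustration.
from sentence_transformers import SentenceTransformer, util

# A toy "pre-built database" of vetted reference statements.
reference_db = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose model
db_embeddings = model.encode(reference_db, convert_to_tensor=True)

def check_claim(claim: str, threshold: float = 0.6) -> str:
    """Return the closest reference if the claim matches one, else flag it."""
    claim_embedding = model.encode(claim, convert_to_tensor=True)
    scores = util.cos_sim(claim_embedding, db_embeddings)[0]
    best = int(scores.argmax())
    if float(scores[best]) >= threshold:
        return f"Matches reference: {reference_db[best]!r}"
    return "No matching reference found; route to human review"

print(check_claim("The Eiffel Tower stands in Paris."))
```

Note that similarity alone does not establish truth: a claim can closely match a reference while contradicting it, which is one reason such tools still need a human in the loop.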

Meta also has the Third-Party Fact-Checking program, where fact-checkers from the International Fact-Checking Network verify misinformation on the company’s platforms, including Facebook, WhatsApp and Instagram. Since it is a Meta-funded initiative, only viral and trending statements come under scrutiny. This leaves broad categories of misinformation, particularly political content and advertising, unchecked.

Bechmann says that “In the same way that AI is used to create and circulate misinformation, AI is also used to recognise misinformation. Right now this is often done by detecting shared patterns in misinformation, using human-labelled misinformation and then training the AI models to perform effectively on video, images and text.”
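As a rough, text-only illustration of the pattern-detection approach Bechmann describes, one could train a supervised classifier on human-labelled examples. The tiny dataset below is a placeholder; a production system would need thousands of labelled items and far richer signals than word frequencies.

```python
# Toy supervised misinformation detector trained on human-labelled text.
# Placeholder data for illustration; real systems also cover images,
# video and network-propagation signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Miracle cure doctors don't want you to know about!",
    "Scientists publish peer-reviewed study on vaccine safety.",
    "Secret plot revealed: share this before it gets deleted!",
    "Central bank raises interest rates by 0.25 percentage points.",
]
labels = [1, 0, 1, 0]  # 1 = misinformation, 0 = legitimate (human-labelled)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

print(clf.predict(["Shocking cure found: share now before they delete it!"]))
```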

Google has adopted ClaimReview, a tagging system that identifies fact-checked articles for search engines, applications and social media sites. It surfaces not only the correct information but also a rating of the claim’s accuracy.
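ClaimReview is structured markup from schema.org that publishers embed in the HTML of their fact-check pages so that search engines can surface the verdict. A sketch of such a tag, with the URL, organisation name and claim all invented for illustration:

```python
# Build an example schema.org ClaimReview tag as JSON-LD.
# The URL, organisation name and claim are invented for illustration.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://example-factchecker.org/checks/moon-cheese",  # hypothetical
    "datePublished": "2023-06-01",
    "author": {"@type": "Organization", "name": "Example Fact-Checker"},
    "claimReviewed": "The moon is made of cheese.",
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",  # human-readable verdict
    },
}

# Publishers embed this in a page inside <script type="application/ld+json">.
print(json.dumps(claim_review, indent=2))
```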

Tech companies alone can’t fight this battle; wider collaboration is needed, especially with the news industry. One partnership that reflects this idea is Project Origin, a collaboration between Microsoft and media organisations including the BBC, CBC/Radio-Canada, The Telegraph, Media City Bergen and the International Press Telecommunications Council (IPTC).

The collaboration strives to keep the flow of news intact from creation through publication to consumption. Participants plan to achieve this with tools that help verify a piece of content’s authenticity from the metadata attached to transcoded files.
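Project Origin’s specification is not detailed here, but the underlying provenance idea can be sketched: record a cryptographic fingerprint of the media in signed metadata at creation time, then recompute and compare it at consumption time. The file names and manifest format below are assumptions for illustration.

```python
# Illustrative provenance check: compare a media file's hash against the
# fingerprint recorded in its metadata manifest. A simplification of the
# general idea, not Project Origin's actual protocol.
import hashlib
import json
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 hash of the media file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify(media_path: Path, manifest_path: Path) -> bool:
    """True if the media matches the hash its publisher recorded."""
    manifest = json.loads(manifest_path.read_text())  # hypothetical format
    # A real system would first verify the manifest's digital signature
    # against the publisher's public key before trusting this hash.
    return fingerprint(media_path) == manifest["sha256"]

if __name__ == "__main__":
    ok = verify(Path("report.mp4"), Path("report.manifest.json"))
    print("Matches publisher's record" if ok else "Tampered or unverified")
```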

Similarly, Agence France-Presse (AFP), the French National Centre for Scientific Research (CNRS), PlusValue, Sotrender, Science Feedback, Università Ca’ Foscari Venezia, and Re-Imagine Europa (RIE) have launched the “Narratives Observatory combatting Disinformation in Europe Systemically” (NODES) project. The pilot project brands itself as Europe’s first Narratives Observatory to fight digital disinformation.

To meet this aim, NODES will build a large-scale mechanism to trace the journey of disinformation from its origin to the end consumer, focusing on traditional and digital media in four European languages (English, Spanish, French and Polish).

A great example of synergy between fact-checkers and news organisations can be found in Norway. Faktisk.no is a non-profit fact-checking organisation partly owned and funded by six competing Norwegian news organisations. Faktisk Innsikt, its AI department, plans to use databases, natural language processing and Norwegian language models to help fight disinformation in the country.
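Faktisk Innsikt’s tooling is not described in detail here, but one common NLP building block in such pipelines is filtering incoming text for check-worthy factual claims before human review. A minimal sketch, assuming the Hugging Face transformers library and an off-the-shelf multilingual zero-shot model (the model choice is an assumption, not Faktisk’s):

```python
# Illustrative check-worthiness filter using a multilingual zero-shot
# classifier. The model choice is an assumption, not Faktisk Innsikt's.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="joeddav/xlm-roberta-large-xnli",  # assumed multilingual model
)

# "Norway has Europe's highest electricity prices."
sentence = "Norge har Europas høyeste strømpriser."
result = classifier(sentence, candidate_labels=["factual claim", "opinion"])
print(result["labels"][0], round(result["scores"][0], 2))
```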

The news industry has taken big steps towards addressing disinformation, and so have many research institutions. One example is the Fake News Risk Mitigator (FERMI), a consortium of 17 institutions funded by the European Union that is working on seven work packages targeting disinformation and fake news.

Despite the efforts of private and public organisations, fully automated AI fact-checking is still a distant reality. The current workflow for debunking fake stories requires both AI and human effort. As reporters and media managers consider using AI in the newsroom, they should also leverage AI tools to fight disinformation.

Priyal Shah

This piece was originally published in The Fix

