Rise of AI-driven misinformation in 2023: Navigating a new era of digital deception

By: Soham Shah
December 18, 2023

Representative image of Artificial Intelligence (AI) and its impact on misinformation. (Source: Dall-E 3/Modified by Logically Facts)

In 1829, the British outlawed Sati in India, a Hindu ritual where a widow was immolated ‘either voluntarily or by force’ on her husband's funeral pyre. Over 150 years later, in 1987, Roop Kanwar, an 18-year-old widow, met a similar fate in Deorala, Rajasthan. Anand Patwardhan's documentary "Father, Son, and Holy War" highlights a significant factor perpetuating this practice: misinformation.

He presents an instance where a woman, defending Sati, brandished a doctored photo as evidence of divine intervention in Kanwar's immolation. Despite attempts to debunk it, the manipulated image was perceived as legitimate proof, illustrating the profound impact of visual misinformation, especially in less digitally literate communities.

This edited image was used as proof of God’s ‘rays’ lighting the pyre during Roop Kanwar’s immolation. (Source: anandverite/YouTube)

The persuasive power of visual misinformation is strikingly evident. If a crudely edited photo could cause such harm in 1987, the threat today is far more alarming, given how rapidly the technology has evolved. The year 2023, marked by the advent of advanced generative AI, has witnessed an unprecedented amplification in the scale of misinformation.

The evolution of AI-enabled misinformation in 2023

Generative AI tools, such as Dall-E and Midjourney, have dramatically escalated the potential for creating misleading imagery. These tools, capable of generating hundreds of deceptive images within minutes, pose a significant challenge to curbing misinformation on social media and through WhatsApp forwards in rural areas.

And while generative AI tools mark a significant leap in the ability to create misleading imagery, the emergence of deepfakes represents the next frontier in digital deception.

Deepfakes: The new frontier of disinformation

India's recent tryst with deepfakes (using AI to fabricate images of events or to replace one person's likeness convincingly with another) underscores this threat. Viral deepfakes of celebrities Rashmika Mandanna and Kajol, and political uses such as the BJP's 2020 election campaign, in which Manoj Tiwari’s speech was altered to appear in multiple languages, exemplify this trend.

Screenshots of a social media post with a deepfake of Bollywood actor Kajol juxtaposed with the original image. (Source: Facebook/Instagram/Modified by Logically Facts)

The capabilities of generative AI extend far beyond efficient content creation for political campaigns, such as email, text, and video communications. This technology also harbors the potential to deceive voters, impersonate political figures, and jeopardize the integrity of electoral processes. Such activities could occur on an unprecedented scale and with unparalleled rapidity, presenting new challenges in the realm of election security.

And fake news is no longer restricted to video: AI-driven text misinformation is proliferating. By December 2023, media rating company NewsGuard had identified 603 'Unreliable AI-Generated News' sites across 15 languages (such as iBusiness Day, Ireland Top News, and Daily Time Update), reflecting the growing sophistication of text-based misinformation.

The rise of generative AI

OpenAI's launch of ChatGPT in November 2022 marked a significant milestone in AI development, prompting the emergence of competing chatbots like Google's Bard, Microsoft's Bing, and X's Grok. These AI chatbots, employing Large Language Models (LLMs), synthesize responses from extensive datasets, exemplifying a leap in AI capabilities. Similarly, AI image generators, which use diffusion models trained on pre-existing datasets, have gained considerable attention.
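
To illustrate how low the barrier to image generation has become, here is a minimal sketch using the open-source Hugging Face diffusers library. The checkpoint name and prompt are illustrative assumptions; Dall-E and Midjourney expose similar text-prompt interfaces, not this exact code.

```python
# Minimal sketch: text-to-image generation with an open-source diffusion model.
# The checkpoint and prompt are illustrative assumptions, not tools named in
# this article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # a publicly released checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # generates an image in seconds on a consumer GPU

# A single text prompt is all that is needed; looping over prompts yields
# hundreds of images in minutes, as described above.
image = pipe("a press photo of a crowd outside a government building").images[0]
image.save("generated.png")
```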

AI tools add a new dimension to the well-established tactics for spreading misinformation. The use of bots on platforms like Twitter and Facebook to manipulate post engagement, and the controversial use of personal data for targeted propaganda, as evidenced by Cambridge Analytica, are now compounded by the ease and speed of AI-generated content. The creation of online echo chambers further fuels this kind of misinformation.

Anticipating future challenges

The versatility of AI in misinformation campaigns is alarming. In a September 2023 report, NewsGuard highlighted a network of 17 TikTok accounts using realistic AI-generated audio in conspiracy videos that have garnered millions of views.

OpenAI researchers have themselves noted how ChatGPT can make disinformation campaigns much cheaper: malicious actors no longer have to spend time writing a misleading article when a chatbot can do it in seconds. Distinguishing AI-generated text from human writing is also extremely difficult, so much so that OpenAI was forced to roll back its AI-text detector ‘due to its low accuracy rate.’
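
One common detection heuristic is statistical: AI-generated text tends to be more "predictable" to a language model than human writing, so unusually low perplexity is weak evidence of machine authorship. The sketch below illustrates that idea with GPT-2; it is a simplification for illustration, not OpenAI's retired classifier, and it shares the same weaknesses.

```python
# Minimal sketch of a perplexity heuristic for flagging AI-generated text.
# Illustrative only: this is not OpenAI's classifier, and any fixed cutoff
# is an arbitrary assumption.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average per-token perplexity of `text` under GPT-2."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024)
    with torch.no_grad():
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

print(f"perplexity = {perplexity('Sample passage to score.'):.1f}")
# Heuristic only: human text can also score low, and light paraphrasing pushes
# AI text back above any cutoff -- one reason such detectors were rolled back.
```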

The recent development in AI-driven image-to-video character animation, primarily exploited for creating non-consensual deepfake pornography that mainly targets women, further exacerbates this issue.


Professor Siwei Lyu, a leading deepfake expert and professor at the University at Buffalo, told Logically Facts that text-prompt video generation is an upcoming generative AI feature that can be a cause for concern in the misinformation space. Such technology will allow deepfake videos to be created from text prompts, just as Midjourney creates an image from a text-based prompt.

AI’s role in escalating health and political misinformation

The potential of AI to intensify health and political misinformation is significant. In countries like India, where health misinformation is rampant, AI-generated deepfakes of influential figures could have grave consequences. Health influencers have also been known to share misleading or questionable media as science, worsening the problem.

Moreover, AI's capacity for hyper-personalized and targeted political propaganda raises concerns about its role in shaping public opinion. Gianluca Demartini, associate professor of Data Science at the University of Queensland, told Logically Facts that the ability to personalize misinformation is the most threatening advancement in generative AI.

He said that by leveraging micro-targeting functionalities from social media, misinformation can be “personalized at scale for different types of users to persuade them (to vote for a party) in a more effective fashion.”

Professor Hany Farid, a leading digital forensics expert and professor at the University of California, Berkeley, told Wired in August 2023, “You could say something like, ‘Here’s a bunch of tweets from this user. Please write me something that will be engaging to them.’”

The ready availability and normalization of deepfakes also give politicians cover to claim that real videos that damage their reputation are manipulated. Take, for example, the recent denial by a Canadian politician who was reportedly caught on video performing racist Indigenous caricatures.

However, some experts like Arvind Narayanan, professor of computer science at Princeton University, and PhD scholar Sayash Kapoor, argue that generative AI only helps malicious actors reduce costs and does not arm them with new misinformation generation capabilities. For example, while malicious actors could always photoshop an image to mislead the public, generative AI tools allow them to do this within seconds now, and at scale.

Combating AI-driven misinformation: A look ahead

Combating AI-generated content remains a formidable challenge. Attempts by various governments to intervene via legislation have so far proved inadequate, given the pace at which the technology is evolving.

And while technological solutions like Intel's FakeCatcher and Google's SynthID show promise, their real-world effectiveness is yet to be proven. The inherent complexity of AI makes the development of foolproof detection tools an ongoing challenge.

Professor Farid also told CNN in June 2023 that the flaws that currently make deepfakes identifiable will resolve over time, making the videos difficult to distinguish from reality. While low-quality deepfakes, called ‘cheapfakes,’ can be identified by humans, higher-quality edits can fool even algorithms.

In an effort to address the proliferation of AI-driven misinformation, several companies in the generative AI sector have introduced measures and tools. Microsoft has declared its intent to embed metadata to identify generative AI content. Meta announced that it has mandated the disclosure of AI-generated content in political advertisements on its platforms. Additionally, Google has developed SynthID, a tool that discreetly integrates a digital watermark into an image's pixels.
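
SynthID's actual technique is proprietary and designed to survive edits. The toy sketch below only illustrates the general idea of hiding a mark in an image's pixels, here via naive least-significant-bit embedding, and why simplistic marks are fragile; it is not how SynthID works.

```python
# Toy illustration of pixel-level watermarking via least-significant-bit (LSB)
# embedding. NOT SynthID's method (which is proprietary and far more robust);
# it only shows the general idea of an invisible mark hidden in pixel values.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the least significant bit of the first pixels with watermark bits."""
    flat = pixels.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n watermark bits."""
    return pixels.flatten()[:n] & 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in image
mark = rng.integers(0, 2, size=256, dtype=np.uint8)        # watermark bits

stamped = embed(img, mark)
assert np.array_equal(extract(stamped, 256), mark)         # survives intact

# One lossy re-encode (simulated here by coarse quantization) wipes the mark
# out, illustrating the evasion problem experts describe below.
recompressed = (stamped // 8) * 8
print((extract(recompressed, 256) == mark).mean())  # ~0.5: no better than chance
```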

However, experts are not confident in the ability of these tools to counter AI misinformation effectively, as ways to evade these digital watermarks exist. Soheil Feizi, associate professor of computer science at the University of Maryland, told The Verge, “This problem is theoretically impossible to be solved reliably.”

Professor Lyu noted that the accuracy of the most advanced AI detection tools ranges between 85 and 95 percent. However, he cautioned that their effectiveness, as measured on training datasets, might not fully translate to real-world scenarios. His team's creation, the Deepfake-o-meter, aims to better assess the real-world performance of these detection tools.
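
This train-test gap is easy to demonstrate on synthetic data. The toy sketch below (all data randomly generated, standing in for no real detector) trains a simple classifier and shows its accuracy collapsing when the "fake" distribution shifts, as happens in the wild when generators improve.

```python
# Toy sketch of why detector accuracy measured on training-like data can
# overstate real-world performance. All data is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def sample(n, shift=0.0):
    """Two-class Gaussian features; `shift` moves fakes closer to real content."""
    real = rng.normal(0.0, 1.0, size=(n, 8))
    fake = rng.normal(2.0 - shift, 1.0, size=(n, 8))
    return np.vstack([real, fake]), np.array([0] * n + [1] * n)

X_train, y_train = sample(2000)
clf = LogisticRegression().fit(X_train, y_train)

X_iid, y_iid = sample(500)               # same distribution as training
X_wild, y_wild = sample(500, shift=1.5)  # "in the wild": newer, better fakes

print("in-distribution accuracy:", clf.score(X_iid, y_iid))    # near 1.0
print("shifted-data accuracy:  ", clf.score(X_wild, y_wild))   # far lower
```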

Similarly, Professor Demartini, who is working on a fake-text detection model, remarked that while generative AI and detection tools are both improving, reliably identifying deepfakes remains a significant challenge. He emphasized the importance of enhancing literacy skills in the general population and equipping people with the necessary tools to combat misinformation.

Demartini also said that generative AI could assist in fact-checking efforts by providing reliable source-based evidence to the public. “It is critical to understand that these tools are not meant to replace expert human fact-checkers but rather to support and empower them in the forensic process of identifying misinformation,” he said.

Use of AI in electoral manipulation

Another big concern is the impact of AI manipulation on upcoming elections worldwide. Recent examples already highlight the scale of the problem. In the U.S., an AI-generated image of an explosion at the Pentagon briefly destabilized the stock market in May 2023. More recently, in Bangladesh, a fake video of exiled BNP leader Tarique Rahman purportedly showed him asking his party to “keep quiet” about the war in Gaza so as not to displease the U.S. government.

Instances like the Republican party's AI-generated doomsday video attacking President Joe Biden and the Congress party's use of AI imagery in India to take a dig at BRS, the opposition party in Telangana, highlight the global scale of this issue.

A screenshot from the Republican party’s AI-generated attack video. (Source: GOP YouTube)

Misinformation thrives in societies with deep-seated biases and a readiness to believe falsehoods about opposing views, explains Joyojeet Pal, a University of Michigan professor. His point is underscored by the fact that the current U.S. House and Senate voting patterns indicate a level of political polarization surpassing that of Richard Nixon's era.

“In a campaign, one problem is getting enough foot soldiers who are adequately competent in crafting clever misinformation, but now you can get AI to assist people who would otherwise not make very clever and deceptive misinformation, which can have greater impact,” Pal added.

AI's growing role in spreading misinformation is a complex issue. On one hand, AI provides tools that can help us spot and correct false information. But on the other hand, there's a real risk that it can be used to create and spread misleading content.

Henry Ajder, a philosopher, AI expert, and the presenter of the BBC documentary "The Future Will Be Synthesised," recently shared his insights with Logically Facts. Ajder observed that AI's progression has surpassed his initial expectations. Reflecting on the advancement in realistic short video clips, he noted, "A year ago, I would have predicted a timeline of around 3-4 years for significant developments. However, now it seems feasible that we could witness the emergence of advanced photo-realistic video generation tools within the next year, albeit with a limited scope in their generative capabilities."

The only way to deal with this for now, experts say, is to improve our understanding of digital media and put strong legislation in place.

(Edited by Nitish Rampal)
