How do audio deepfakes add to the cacophony of online misinformation?

By: Ishaana Aiyanna
January 5, 2024

Representative image created using an AI tool.

The internet had a field day with AI-generated audio-visuals of Indian Prime Minister Narendra Modi and his Italian counterpart Giorgia Meloni after the duo met during the G20 summit in New Delhi in September 2023 and again during the COP28 conference in Dubai in December 2023. These videos, mostly overlaid with Bollywood songs and with dialogues rendered in Modi’s voice using synthetic audio, have amassed millions of views on platforms like Instagram.

Satirical and entertainment value aside, synthetic audio is becoming an increasingly effective means of spreading falsehoods as it grows more convincing and harder to distinguish from real audio.

The challenge that is AI-generated audio

With AI (artificial intelligence) technology advancing, experts are increasingly worried that genuine audio clips will be dismissed as fake under a defense of plausible deniability, given the lack of reliable detection methods and skills.

For instance, two alleged audio clips of Dravida Munnetra Kazhagam (DMK) lawmaker Palanivel Thiaga Rajan surfaced in April 2023. In the clips, the former finance minister of the south Indian state of Tamil Nadu could purportedly be heard accusing his party members of corruption and praising the opposition Bharatiya Janata Party (BJP). 

He dismissed the clips as “fabricated,” but a report by the non-profit publication Rest of World noted that while experts were divided on the first clip, they agreed the second was authentic.

“Compared to videos, it is much easier to make a realistic audio clone. Audio is also easily shared, particularly in messaging apps. These challenges of easy creation and easy distribution are compounded by the challenges in detection – the tools to detect synthetic audio are neither reliable across different ways to create a fake track or across different global languages, nor are journalists and fact-checkers trained with the skills to use these tools,” says Sam Gregory, executive director at the non-profit Witness which uses video and technology to defend human rights.

Misleading audiences: One fake audio at a time

While artificial intelligence has unquestionably revolutionized many aspects of our lives, its misuse is equally undeniable, with deepfake audio posing a monumental challenge to the information ecosystem. The risks are pervasive: deepfake audio is being used to perpetrate identity fraud, mount personal attacks, and spread electoral misinformation across the globe.

There have been multiple instances of politicians being falsely represented through such media. In late September 2023, ahead of the general elections in Slovakia, a deepfake audio clip of Progressive Slovakia party chairman Michal Šimečka purportedly describing to a journalist a scheme to rig the elections gained huge traction among Slovak social media users.

In another instance, a fake audio clip of the United Kingdom’s Labour Party leader Keir Starmer purportedly verbally abusing a staffer racked up millions of views. And in India, ahead of state polls in Telangana and Madhya Pradesh, a slew of fake audio clips and videos were shared to mislead voters during the election campaigns.

Speaking to Logically Facts, Indian political commentator and columnist Amitabh Tiwari said, “AI-generated audio adds to the problem of misinformation. Its impact is most likely to affect any target voter base depending on the sensitivity of the content.” 

Commenting on the accessibility of the deepfake audio tools, he added, “No single party is at a competitive advantage as this technology is easily available to everyone and it impacts mainly voters who are not digitally literate and cannot discern between real and fake.” 

A report by BoomLive, an Indian fact-checking website, highlighted how videos of prominent personalities overlaid with synthetic or machine-generated audio are also being used in India to create fake ads and clips that mislead people with get-rich-quick schemes, fake gaming and trading platforms, and even purported cures for ailments like diabetes.

Recently, an AI-voice scam cost an Indian man ₹1.58 lakh (158,000 rupees) when he received a call emulating his brother-in-law’s voice and seeking immediate financial assistance for medical treatment. Such AI-powered frauds appear to be gaining popularity among miscreants.

In fact, according to a report released by the computer security software company McAfee in May 2023, 47 percent of respondents in India said they had either been a victim of an AI-voice scam themselves (20 percent) or knew somebody else who had (27 percent). Globally, 10 percent of surveyed adults had been targeted personally, while 15 percent said somebody they knew had been a victim of an AI-voice fraud.

Why is detecting digitally created audio difficult?

Previously, voice cloning was easier to identify: it was not as sophisticated as it is today and required considerably more training data for the replication tools and models. Emphasizing how accessible the technology has become, Gregory said, “AI-created and altered audio is surging in usage globally because it has become extremely accessible to make – easy to do on common tools available online for limited cost, and it does not require large amounts of a person's speech to develop a reasonable imitation.”

Gregory also pointed out practical and technical challenges, saying, “On a practical level, audio lacks the types of contextual clues that are available in images and videos to see if a scene looks consistent with other sources from the same event.”

He added that deepfake audio detection is subject to challenges similar to those affecting other forms of media: tools won’t work well on compressed audio with background noise, and algorithms often aren’t as effective on “widely used global languages outside of dominant Global North languages nor as effective on non-native speaker English.”
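Gregory’s point about compression and noise can be made concrete with a small, purely illustrative sketch (it does not represent any specific detection tool mentioned in this piece). Many audio classifiers build on spectral features such as MFCCs; the Python snippet below, which assumes the librosa library and uses an arbitrary placeholder file path and noise level, shows how even simple degradation shifts those features, one reason detectors trained on clean audio falter on real-world clips.

```python
# Illustrative sketch only: shows how audio degradation shifts the spectral
# features (MFCCs) that many audio classifiers rely on. The file path and
# the ~10 dB noise level are arbitrary assumptions for demonstration.
import numpy as np
import librosa

def mfcc_summary(y: np.ndarray, sr: int) -> np.ndarray:
    # Mean MFCC vector: a compact spectral summary of the clip.
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20).mean(axis=1)

# "clip.wav" is a placeholder path for any speech recording.
y, sr = librosa.load("clip.wav", sr=16000)
clean = mfcc_summary(y, sr)

# Simulate a degraded re-share: additive noise at roughly 10 dB SNR.
rng = np.random.default_rng(0)
rms = np.sqrt(np.mean(y ** 2))
noisy_y = y + rng.normal(scale=rms / np.sqrt(10.0), size=y.shape)
degraded = mfcc_summary(noisy_y, sr)

# The larger this drift, the less a model trained on clean audio
# can be trusted on the degraded clip.
print("MFCC feature drift:", np.linalg.norm(clean - degraded))
```

Lossy re-encoding by messaging apps has a similar feature-shifting effect, which is why a detector that performs well in lab conditions can struggle on a clip that has been forwarded many times.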

Is it possible to bell the cat?

In November 2023, recognizing the risk posed by synthetic media, Google announced it would add an ‘About this result’ label to AI-generated media in Google Search. The company also stated that, going forward, YouTube will require creators to disclose altered or synthetic content created using AI tools or face suspension. These disclosure requirements extend to election advertisement policies as well.

With a majority of nations across the world heading into elections in 2024, AI-generated audio could exacerbate the misinformation problem, creating a pressing need for more effective ways to fact-check and detect digitally generated content.
