The Donald, the Pope & the Unholy AI: Can we really detect AI image misinformation?

By: vakasha sachdev&
sam doak&
April 3 2023


Source: Twitter, generated via Midjourney

Apart from the fact they live in places with excessive gold furnishings, what do Donald Trump and Pope Francis have in common? The last few years should have prepared us for anything, but it's still unlikely this question made it to our 2023 bingo sheets. Perhaps it did for you, in which case you are better suited than most to foresee the potential for misinformation and disinformation created by Artificial Intelligence (AI)-empowered tools.

Because that is what unites these unlikely bedfellows: AI-generated art featuring Trump and the Pope has recently gone viral. But while this initially brought levity to social media timelines (Papal vestments by Balenciaga! A former U.S. President gurning during a cop chase!), journalists and fact-checkers soon realized the joke could actually be on them.

Take the image of Pope Francis in a puffy white luxury jacket, created using the AI image generator Midjourney by a man on psychedelic drugs. Even so, digital culture journalists like Ryan Broderick took it to be real; regular internet users stood little chance. Various images of Trump in advance of his anticipated arrest on March 21 were clearly intended as comical. However, they were soon shared across social media without any qualifiers. To make things more confusing, Trump shared one image – originally posted by a self-declared parody account – on his Truth Social platform.

Despite the attention, these AI images do not appear to have caused real harm – although this may change with Trump expected to surrender on April 4 following his recent indictment. Unfortunately, thanks to developments in AI, hyper-real images like these are not only here to stay, but will only get better. "I am convinced that this will definitely change, and will do so in the very near future," cautions Heidi Julien, Professor of Information Science at the University at Buffalo. "Already, many – perhaps most? – people cannot distinguish AI-generated images from actual photography."

If content intended as a joke can be so convincing, what happens when someone with bad intentions decides to create it?

Aren't AI-generated images easy to spot?

Followers of the discourse on AI images over the last year or so might be tempted to raise an eyebrow at this point. It's obvious, you might say. Look at the hands! The blurry elements! Check out the unnaturally focused eyes or manic laughter! These have all been ways to spot AI-generated images – until now. Some of these "tells" were even present in the AI images of Trump's arrest, including strange body proportions, blurry faces of those around him, and bizarre facial expressions.

However, Eliot Higgins, co-founder of the investigative journalism platform Bellingcat – and creator of his own Trump AI images – pointed out to Logically that things are changing. "Midjourney was pretty bad with hands, lots of people with extra fingers in the earlier versions, but the recent update has cleared that," he said. "It's still very difficult to create images from anything but straightforward prompts using common words and actions, but it can make a passable fake of something that's simple."

AI-generated image of Donald Trump being arrested, with hands of some officers showing anomalies. Label saying FAKE on bottom left.

Source: Eliot Higgins/Twitter, generated via Midjourney

Assistant Professor Anupam Guha, who researches AI at the Indian Institute of Technology, Bombay, highlights a different problem with this approach: "If a person is going to look at an image for 'tells,' they are already politically skeptical and therefore likely to question what they're seeing and inquire into it. But 90 percent of people will not do that anyway, which is why they get taken in by not just this kind of technically sophisticated disinformation, but far less convincing things," he argues.

Henry Ajder, a generative AI expert, agrees this approach isn't feasible given the exponential improvements to this technology over the last year. "If you think about Midjourney, for example [...] that was released less than a year or a year ago, but it's already on version five, with dramatic improvements on realism."

Put simply, the technology is being trained out of the kind of obvious errors that betray its AI origins.

"AI image generation software like Midjourney, DALL-E 2, or Stable Diffusion works the same on a superficial level," explains Pim Verschuuren, AI research engineer at Logically. "You write out how you want the image to look by constructing a textual prompt, e.g., 'A big fat horse holding a flamethrower standing on the moon,' and then pass this to the model, which generates an image following this textual description."

These are 'diffusion model' AI tools. They are trained by watching a corruption process gradually wipe out detail in their training images, and then learning to reverse that process to recreate something close to the original. This allows them to generate highly detailed and realistic images based on prompts provided by a user.
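
The corruption step is simple enough to show in a few lines. The toy sketch below, in plain NumPy with invented values, only runs the forward "wiping out" process; real systems additionally train a neural network to undo it, which is where the generative power comes from.

```python
# Toy illustration of the "corruption process" described above: a clean image
# is gradually destroyed by small amounts of Gaussian noise. Real diffusion
# models train a neural network to reverse these steps, turning pure noise
# back into a detailed image guided by the text prompt. Values are illustrative.
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))            # stand-in for a clean training image
betas = np.linspace(1e-4, 0.02, 1000)   # noise schedule: how much detail to wipe per step

x = image
for beta in betas:                      # forward process: add a little noise each step
    noise = rng.standard_normal(x.shape)
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# After all steps, x is essentially pure noise. Generation runs this in reverse:
# a trained model predicts and removes the noise one step at a time.
print("signal left after corruption:", np.corrcoef(image.ravel(), x.ravel())[0, 1])
```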

Can't technology be the solution?

Given that technology is creating these images, it is tempting to think technology will also offer a way to detect them. Professor Guha disagrees: "There is no foolproof technical solution, and it will never exist." He explains that this is because any technological detector will itself be based on machine learning systems, which can only ever operate on the basis of probability. "We will never be in a position where you press a button and know for sure that something was generated by technology," he adds.

This is not to say there aren't attempts to do precisely that. Both Verschuuren and Ajder discussed the Adobe-led Content Authenticity Initiative, a community of tech companies, media houses, and NGOs trying to create an open standard to prove the provenance of a particular piece of content. If successful, according to Ajder, "you'll be able to essentially press a little i button on the corner of an image, and it will give you information about how it's being manipulated, when it was manipulated, where it was first taken or generated; what tools have been used, and potentially, who generated it, or created it."
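
The core idea is that provenance metadata travels with the file and is bound to its pixels, so tampering breaks the binding. The sketch below is a deliberately simplified, hypothetical illustration of that principle; the real C2PA specification behind the Content Authenticity Initiative uses cryptographic signatures and far richer claims, and the manifest fields here are invented for illustration only.

```python
# Hypothetical, simplified sketch of the provenance idea: a manifest records a
# hash of the image, so any later alteration of the pixels breaks the match.
# Field names and files are invented for illustration.
import hashlib
import json

def sha256_of(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_provenance(image_path: str, manifest_path: str) -> bool:
    """Return True if the image still matches the hash recorded in its manifest."""
    with open(manifest_path) as f:
        manifest = json.load(f)  # e.g. {"creator": "...", "tool": "...", "sha256": "..."}
    return sha256_of(image_path) == manifest.get("sha256")

# Usage (hypothetical files):
# print(check_provenance("photo.jpg", "photo.manifest.json"))
```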

However, there are significant challenges to such efforts. Ajder notes that the developers of the new generation tools – and bad actors looking to use them – "are always going to be on the front foot." On the other hand, "detection systems are going to have to be notified or find out that there are new iterations out there, and new ways that tools are being broken, before they can patch or respond." Additionally, much modern non-AI content and imagery has synthetic elements to it, whether through computational photography or post-production work. Training technology to detect synthetic elements with the intent of identifying AI-generated images could be very challenging.

He refers to the video of a Myanmar ex-minister's "confession," released by the military junta after the 2021 coup. "People in the country were suspicious that it was a deep fake; they ran it through an online detection tool. The detection tool said yes, it is a deepfake – but it wasn't. And that led to furthering of disinformation around the content of the video," he says. Researcher Sam Gregory noted the deepfake detection tool used to assess the video asserted with 90+ percent confidence that the video was fake. A more careful analysis by specialists suggested it was more likely just a poor-quality video of a forced confession.

Screenshot of account from Myanmar sharing screenshot of deepfake detector results on video of ex-minister's confession.

Source: Twitter

Ajder also points out that even if we assume a 99.9 percent accuracy by detection tools across audio, video, and images, given the scale of online content, that 0.1 percent failure rate is still significant. "If you're the BBC or you're CNN, or even a government agency, and you're trying to make decisions in timely or fast-paced scenarios, relying on these systems could end up with you making pretty poor judgments or taking bad actions based on poor information," he warns.
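
A back-of-the-envelope calculation makes Ajder's point concrete. The daily volume below is an assumption chosen purely for illustration, not a real statistic.

```python
# Even a detector that is right 99.9 percent of the time produces a large
# absolute number of mistakes at internet scale. The daily volume is assumed.
accuracy = 0.999
images_checked_per_day = 10_000_000      # assumed volume, for illustration only

errors_per_day = images_checked_per_day * (1 - accuracy)
print(f"Misclassified images per day: {errors_per_day:,.0f}")  # ~10,000
```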

Guha also notes that relying on technological solutions will require implementation and enforcement by social media platforms and tech companies, whose track record does not inspire confidence, especially in countries in the Global South. He cites the failures in Facebook's content moderation in Myanmar and Assam, India, as examples of the dangers of adopting such an approach.

If not tech, then what?

Pessimism about tech's ability to solve its own problems is widespread among experts, with Professor Julien stating, "I don't believe that it is realistic to expect developers of image generation tools to accept any responsibility for the interpretation or results of their efforts." This pessimism is not unfounded: none of the companies behind the major AI image generators – OpenAI, Stability AI (maker of Stable Diffusion), and Midjourney – are part of the Content Authenticity Initiative.

Professor Julien suggests that a better area of focus is on media and digital literacy, which "can help combat the mis/disinformation that uses AI-generated digital images." She believes "digital literacy should be taught through school, from kindergarten forwards, so that everybody develops a healthy skepticism about mis/disinformation and social media," citing Finland as an exemplar. Professor Guha broadly agrees, but acknowledges that this requires a "challenge to the psychology that we all have, that while the written word can be false, images or videos must be true."

Digital and media literacy involves training people to take steps to verify the information presented to them, rather than blindly accepting it as true. These steps include conducting a reverse image search using Google or Yandex, which could confirm whether a particular image is an actual photograph taken by a reliable media house – especially for major events. A basic web search for the event in question can serve the same purpose, as shown in the sketch below.
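
For readers who want to automate the first of those steps, the snippet below simply builds reverse image search links for an image that is already hosted online. These query patterns have been widely used but are not official, documented APIs and may change; for images stored locally, the search engines' own upload pages do the same job.

```python
# Build reverse image search URLs for an image hosted online. The URL patterns
# are commonly used but unofficial and may change over time.
from urllib.parse import quote

def reverse_search_urls(image_url: str) -> dict:
    encoded = quote(image_url, safe="")
    return {
        "google_lens": f"https://lens.google.com/uploadbyurl?url={encoded}",
        "yandex": f"https://yandex.com/images/search?rpt=imageview&url={encoded}",
    }

# Usage: open the resulting links in a browser and check whether the image
# traces back to a reliable media house or to an AI-art account.
for name, url in reverse_search_urls("https://example.com/suspect-image.jpg").items():
    print(name, url)
```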

As noted, there are limitations to such an approach. We live in a world where distrust in the media and institutions is high, and the information people consume is highly polarized. According to Professor Guha, this is where we might see a difference between the effects AI-generated image technology might have in the Global North when compared to the Global South.

While the technology itself is easily available across countries, with public API access in many cases, he argues that "some populations in the Global North are more politically skeptical as compared to those in the Global South," which would allow the former to more readily identify synthesized misinformation. This links back to media literacy, which he notes can be quite poor in countries like India, where the discourse around mis/disinformation focuses on the simplistic idea of "fake news."

Is there a chance we're all overreacting a bit?

People do not seem unduly concerned about AI images. Catholics have not come out in the streets to protest Pope Francis' sacrilegious new wardrobe. Even a potentially more fraught image – French President Emmanuel Macron caught amid ongoing pension reform protests – disappeared into the internet ether despite some initial uncertainty. 

Source: Jeremie Galan/Twitter, generated via Midjourney

Have fact-checkers and tech journalists jumped the gun? Eliot Higgins is inclined to think so, and suspects AI is "more likely to be used to create political memes rather than have any serious news impact. No serious person is going to want to be caught out promoting fake images as real, and these things aren't difficult to fact check, so even fringe news sites won't want to be caught out using them."

It is worth noting that bad actors have long used fake screenshots and digitally altered images to spread misinformation without AI. As Professor Julien points out, though, the scalability of these new tools may be a problem: previously, creating such media "required expertise and resources beyond most people's means."

Henry Ajder, however, believes that how we inform people about this kind of technology is extremely important. "If we start telling people, 'Look, you just can't trust what you see anymore. Everything could be a deep fake;' this services this quite pernicious dynamic we've seen with this concept called the 'Liar's Dividend,' which provides plausible deniability for bad actors to dismiss inconvenient or incriminating videos, audio clips, and other content," he explains. While we need to educate people about this technology and what it can do, we have to make sure it doesn't feed into a "race to the bottom," which can be exploited by bad actors looking to take advantage of the prevailing climate of distrust.

It may well be that in the future, this flurry of concern from journalists and fact-checkers about AI-generated images will be quietly filed in the Cabinet of Things We Never Talk About Again, somewhere next to Y2K. But in a world where people have decided to once again believe that the earth is flat – a theory disproved by ancient civilizations who had just about invented mathematics – it would be wise to focus on media literacy programs, built-in AI identifiers on images, and holding tech companies accountable for their creations.
