Saint Von Colucci case exposes the dangers of AI-related misinformation

By: Anurag Baruah
May 5, 2023


On April 23, 2023, journalists worldwide received a press release announcing the demise of Saint Von Colucci, an actor who had allegedly died in Seoul after undergoing multiple surgeries in an attempt to look like one of the BTS boy band stars. The Daily Mail reported on Colucci as an exclusive story on April 24, 2023. Reputable media houses including The Independent, Mirror, Gulf News, NDTV, The Economic Times, and India Today followed suit. 

The publications quickly retracted their stories when they discovered Colucci was a figment of someone’s imagination: an AI-generated character. While unsuspecting online users routinely fall prey to disinformation stemming from AI-generated images, the fact that seasoned media houses published such reports highlights how AI-generated mis/disinformation narratives are becoming increasingly difficult to spot, even for trained eyes.

Who was Colucci?

Emerging narratives on social media claimed Colucci was a 22-year-old Canadian actor who died after spending $220,000 on 12 plastic surgeries in an attempt to look like Jimin, a K-pop star from BTS. Users alleged Colucci did this to play Jimin for a U.S. streaming network, but that he died at a South Korean hospital after developing complications from the surgeries. 

Source: ghostarchive/Screenshot of the now-deleted Daily Mail article

A Daily Mail article even quoted Colucci’s so-called publicist as saying that the actor had gone into surgery to remove a jaw implant put in the previous November, as he had developed infections, resulting in complications that led to his death. The paper reported on how Colucci had struggled to break into the South Korean music industry and felt discriminated against because of his “western looks.”

It was later revealed that all of this information was fabricated: nothing but an elaborate hoax involving AI-generated images. The initial press release, written in poorly structured English, was sent by a PR agency called HYPE Public Relations.

The red flags

Raphael Rashid, a Seoul-based freelance journalist, was the first to flag multiple issues with the press release: links to Colucci’s social media accounts did not open, and HYPE’s website seemed unfinished and recently registered. “All the red flags were there. All the inconsistencies. Yet many large media orgs believed the story and wrote about it without any fact checking,” he tweeted. He later wrote about the issue for Al Jazeera. 

Though Colucci was described as a fairly popular songwriter who had supposedly written for a number of K-pop stars, he had very little online presence, and no public mourning followed his supposed death. The photos of Colucci are blurry and exhibit tell-tale signs of being AI-generated, with one of them returning a 75 percent result on Maybe’s AI Art Detector. 
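For readers curious how such a detector score is obtained and read, below is a minimal, illustrative Python sketch. The model name (`umm-maybe/AI-image-detector`, the Hugging Face model behind a tool of that description) and the `"artificial"`/`"human"` labels are assumptions not confirmed by this article; the live pipeline call requires a network download, so it is shown commented out and a sample result in the same shape is used instead.

```python
# Hedged sketch: interpreting an AI-image detector's classification output.
# The model name and labels below are assumptions, not details from this article.

# from transformers import pipeline
# detector = pipeline("image-classification", model="umm-maybe/AI-image-detector")
# scores = detector("suspect_photo.jpg")  # returns a list of {label, score} dicts

def verdict(scores, threshold=0.75):
    """Map label/score pairs to a rough verdict based on the 'artificial' score."""
    artificial = next((s["score"] for s in scores if s["label"] == "artificial"), 0.0)
    return "likely AI-generated" if artificial >= threshold else "inconclusive"

# Sample output in the shape a transformers image-classification pipeline returns,
# using the ~75 percent figure cited above:
sample = [{"label": "artificial", "score": 0.75}, {"label": "human", "score": 0.25}]
print(verdict(sample))  # → likely AI-generated
```

Note that such detectors give probabilistic scores, not proof; a 75 percent result is one signal to weigh alongside the other red flags described here.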

The press release also named the hospital in which Colucci died as “Seoul National Hospital,” which does not exist and is presumably a reference to Seoul National University Hospital. Colucci’s Instagram page, @papaxxzy, has been deactivated and reactivated multiple times in recent days and at present stands deactivated. 

Al Jazeera also reported an issue with a press release connected to Colucci circulated last year, in which the alleged actor was described as “the second son of Geovani Lamas, the CEO of IBG Capital, Europe’s top hedge fund company.” However, no official presence of Lamas could be found online, and the top search result for IBG Capital leads to an investment firm in Arizona, U.S.

Another press release, dated April 1, claimed that Colucci’s parents were suing the South Korean talent agency that allegedly employed their son; journalists have since called it a hoax. It was published by NextShark, a news site covering Asian American news, and mentioned a person called Adriana Ruthman as the sender. The press release also noted that Colucci’s music projects had been halted because of a “conflict of interest between his American and South Korean management.” 

However, NextShark mentioned that when it tried to reach the PR companies for further information, only Ruthman’s email address seemed to be working; the others returned an “Address not found” notice from Google. 

“The biggest clue was the press release announcing Saint Von Colucci was ‘intubated’; an artist’s management would never do this,” Variety quoted Riddhi Chakraborty, assistant editor at Rolling Stone India, as saying. 

While multiple applications and programs are being used to churn out AI-generated images, Midjourney has stood out in terms of popularity and usage, gaining more than 13 million members within a year. Reports suggest that most of the fake viral AI-generated images, including those of former U.S. President Donald Trump being arrested and of the Pope in a puffer jacket, were created using Midjourney. 

Midjourney eventually had to halt free trials of its service because of a sudden increase in users. Midjourney CEO and founder David Holz recently announced that this had to be done because of “extraordinary demand and trial abuse.”

What does AI-generated mis/disinformation look like?

Logically Facts has debunked and analyzed a series of mis/disinformation narratives involving AI-generated images over the past few months. These ranged from AI-generated images claiming to show former U.S. President Donald Trump being arrested during his recent indictment to Pope Francis in a puffy white luxury jacket. 

Others included a debunk of an AI-generated image claiming to show Julian Assange in a U.K. prison. One of the first and most widely shared AI-generated images to surface in the aftermath of the devastating Turkey-Syria earthquake claimed to show a real Greek firefighter holding a child; Logically Facts debunked it. In India, social media users shared multiple AI-generated images of Prime Minister Narendra Modi. 

The fact that AI-image generators like Midjourney can be used to create images of U.S. President Joe Biden and Russian President Vladimir Putin, but not of Chinese President Xi Jinping, highlights how uneven moderation rules can leave important figures particularly susceptible to narrative-building through mis/disinformation and hate speech. 

The crisis-like situation is perhaps best reflected by a recent open letter from prominent researchers, technologists, and public figures requesting “a moratorium of at least six months on the training and research of AI systems more powerful than GPT-4,” in which they ask, “Should we let machines flood our information channels with propaganda and untruth?” 

As AI-generated images increasingly blur the lines between truth and fiction, Geoffrey Hinton, known as the ‘Godfather of AI,’ recently quit Google, warning about the dangers of AI-driven misinformation. 
