The perfect storm of AI fakes and paid Twitter verification

By: Nikolaj Kristensen
May 26, 2023


On the morning of May 22, 2023, a fake AI-generated image circulated on social media, suggesting an explosion had occurred outside the Pentagon, headquarters of the U.S. Department of Defense. On Twitter, the image was soon shared by several accounts marked with Twitter’s blue checkmark, including an account posing as Bloomberg News.

By the time the Arlington County Fire Department, the Pentagon, and fact-checkers refuted the reports, the purported explosion had already made headline news in India and, seemingly, caused a drop in U.S. financial markets.

It’s not like no one saw this coming. Since Elon Musk took over Twitter and began changing the blue tick from a symbol of notability and authenticity to one of paid subscription, academics, tech reporters, and information security experts alike have warned it could lead to a surge in misinformation. Paired with the rise of generative AI technology, such incidents look inescapable. Democratic processes and the markets may pay the price.

A backdoor for bad actors

Twitter began verifying accounts in 2009 as a way for users to make sure that notable accounts were who they claimed to be. When Elon Musk took over as head of Twitter in October of last year, one of his first actions was to change the Twitter Verified blue checkmark. Instead of being granted to accounts based on activity, notability, and authenticity, the blue tick can now be obtained by any account that pays an $8-a-month Twitter Blue subscription. Experts warned that the change might lead to a surge in misinformation on the platform, and early on, the pay-for-verification system was a mess, leading to imposters and causing drops in stock prices.

One expert who has warned against the changes is Michael Biddlestone, a Postdoctoral Research Associate researching and developing misinformation interventions at the Cambridge Social Decision-Making Lab. When he learned of Monday’s fake Pentagon image being amplified by blue-ticked accounts, he says he was “disturbed but not surprised,” as it illustrates the recipe for disaster created by the combination of AI-generated imagery and paid verification.

“Not only is AI image generation technology advancing at a rapid pace, but this coincides with worrying vulnerabilities in platforms like Twitter,” he told Logically Facts. “The paid account verification approach opens a backdoor for the highest-bidding bad faith actors to amplify their polarising content on trusted accounts, making people less skeptical of false information that is already hard to spot.”

The anonymity and false prestige that the blue tick now confers allow actors with harmful intent to mask polarising content as reliable information, Biddlestone explains. While the original process for verifying accounts was by no means perfect, it did provide a clear symbolic marker of authenticity and trustworthiness, explains Timothy Graham, a Senior Lecturer in Digital Media at the Queensland University of Technology (QUT): “The original verified blue ticks served as a friction for the spread of news content – inattentive, content-hungry users could easily evaluate the authenticity of a source because the blue tick was a rare commodity.”

With the blue tick now effectively meaningless, users are more likely to accidentally share highly emotional, sensational content, such as an explosion outside the Pentagon, said Graham. 

A central position in the media ecosystem

Apart from being spread by blue-checked accounts, the fake image was also shared by the Twitter account of Russian state media outlet RT in a post that has since been deleted. Several Indian news outlets shared false reports of the explosion. One of them, Republic TV, showed the fake image on air, citing RT’s tweet.

This is the real problem of the paid verification model, said Graham, namely that false content can now more easily be “traded up the chain”: “It’s one thing for a fake video or photo to circulate in fringe communities; quite another thing for it to be picked up by mainstream media. Once a reputable source amplifies it, network effects start kicking into gear and the potential for exponential diffusion of the content is increased. The original blue tick scheme reduced the probability that content could be traded up from fringe to mainstream. That bridge is now easier to cross, unfortunately,” he said.

This is especially problematic as Twitter is central in the global media ecosystem. According to Graham, the platform has become a sort of infrastructure for journalists, mass media, politicians, and political discourse.

“There’s not a TV news station in the world that doesn’t have a Twitter account associated with it. Journalists and editors listen to social signals on Twitter to know what to cover in their reporting and how to frame it. So decisions about the design of Twitter have seismic consequences – the blue tick is simply one of these. If platforms are not regulated and are simply left to the whims of self-appointed CEOs, I do not hold high hopes that the situation regarding mis- and disinformation will improve,” said Graham. 

Republic TV later retracted the report, tweeting: “Republic had aired news of a possible explosion near the Pentagon citing a post & picture tweeted by RT. RT has deleted the post and Republic has pulled back the newsbreak.” The RT press office similarly released a statement saying: “As with fast-paced news verification, we made the public aware of reports circulating and once provenance and veracity were ascertained, we took appropriate steps to correct the reporting.”

According to Twitter’s current policy, a criterion for getting the blue checkmark - apart from being a Twitter Blue subscriber - is that an account "must have no signs of being misleading or deceptive." One blue tick account that shared the fake image on Monday was named Bloomberg Feed and used the logo of the business news site Bloomberg. The account was later suspended. Another account that shared the image is known to post conspiratorial content, while a third shared another AI-generated fake - of an explosion at the White House - just last month. Both accounts are still up and running.

Twitter responded to Logically Facts’ request for comment with an auto-reply of a poop emoji.  

Affecting the market

The faked Pentagon explosion affected the financial markets, as the S&P 500 dropped 0.3 percent around the time the image went viral. This is not the first time Musk’s Twitter experiments have caused a stir in the market. When paid verification first launched in November 2022, a “verified” account posing as the pharmaceutical company Eli Lilly tweeted that all insulin would be free, causing a drop in Eli Lilly’s stock price. The incident resulted in Eli Lilly pausing all its ad campaigns on Twitter.

Similarly, defense contractor Lockheed Martin saw its shares drop 5.5 percent after a blue-checked account with the handle @LockheedMartini tweeted that the company would stop selling weapons to countries like Saudi Arabia and Israel while waiting for investigations into the countries' human rights records. Adam Kobeissi, editor-in-chief at industry publication The Kobeissi Letter, told the Associated Press that the market has become increasingly reactive to headlines due to algorithmic trading designed to react to news in an instant.

Signs of a generative AI model

The image of the explosion at the Pentagon shows several signs of coming from a generative AI model, Siwei Lyu, a computer science and engineering professor at the University at Buffalo School of Engineering and Applied Sciences, told Logically Facts. After running the image through an algorithm designed to detect whether an image comes from a generative AI model, Lyu found that the building in the photo has artifacts typical of such models. “Note particularly the windows on the first and second floors. They are irregular and are not consistent with real-life buildings,” he said.
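Detection tools of the kind Lyu describes typically treat the problem as image classification: a model trained on real and generated photos scores how likely a given image is to be synthetic. The sketch below shows, in broad strokes, how such a check might be run with an off-the-shelf classifier; the model identifier and labels are placeholders and assumptions for illustration, not the specific tool Lyu used.

```python
# Minimal sketch: screening a suspect photo with an AI-image classifier.
# "example-org/ai-image-detector" is a placeholder model name; substitute
# any image classifier trained to separate real photos from generated ones.
from transformers import pipeline

def screen_image(path: str):
    detector = pipeline("image-classification", model="example-org/ai-image-detector")
    # Returns a list of {"label": ..., "score": ...} predictions,
    # e.g. "artificial" vs. "real" with confidence scores.
    return detector(path)

if __name__ == "__main__":
    for prediction in screen_image("pentagon_explosion.jpg"):
        print(f"{prediction['label']}: {prediction['score']:.2f}")
```

As Lyu notes below, classifiers like this catch low-quality fakes far more reliably than carefully post-processed ones or images from models they were not trained against.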

There are numerous signs that the image has been manipulated: the fence seems to be hovering above the ground in some places, only one side of the five-sided Pentagon building is visible, the fence and the pavement melt together, the pavement and the grass melt together, the temporary fence merges with the permanent fence, and a lamppost appears to be disjointed at the bottom. Lyu warns that generative AI models will continue to improve at an accelerating pace. “In terms of detecting these fakes, current detection methods are generally capable of catching low-quality fake images like this. But they are not robust and cannot handle well-crafted fakes with manual post-processing to remove the artifacts or fakes made with unknown models,” he said.

According to Timothy Graham of QUT, generative AI models have made the cost of producing fakes so low that anyone with a computer can do it – it no longer requires intellectual labor or specialized skills: “What we are seeing is that with some probability there will be a perfect storm of conditions on any given day that lead to a fabricated image making its way to the top of the media ecosystem and having flow-on effects for financial markets, etc. AI has increased that probability because of volume and ease of production.”
