By: Christian Haag
July 14, 2023
Source: Alamy/Reuters Connect
Sensus, a Swedish association for popular education, published a fully AI-generated website called “Nyhetsveckan” in June 2023. By all appearances, it looked like a normal news website; however, all of the articles on it were completely fake. According to Sensus, its purpose was to start more conversations about AI, democracy, and disinformation, and to show how AI can affect the flow of information and democratic processes. The reactions in Sweden came quickly, especially on Twitter. Some found it enlightening, but others called it a “stupid idea” and said that Sensus had, in practice, created a troll factory.
An example of a Nyhetsveckan disinformation article. The headline translates to “Warning about nanotechnology in vaccines – scientists concerned about surveillance of vaccinated individuals.” This claim is false and has been fact-checked by PolitiFact.
The website currently features approximately 50 articles, some of which are relatively harmless. However, several articles disseminate harmful misinformation narratives, including articles about the WEF’s 2030 agenda and claims concerning vaccines: both recurring false narratives fact-checkers encounter on a daily basis.
The site offers a free email newsletter and claims to have been available in newspaper format since 1987, and online since 2008. On its “about us” page, the website states that it offers “The truth, unfiltered,” and claims objective reporting without “the filter used by many of our colleagues,” much like actual fake news sites, alongside unverifiable figures about its popularity and reach.
Despite this authentic appearance, Sensus has implemented disclaimers throughout the site. Upon entering the website, a red warning label appears that states the website is a fictitious AI-created webpage. This appears each time a user visits other pages contained within the site, with the purpose of reiterating that the articles and website are not real.
The red label on Nyhetsveckan. Logically Facts has translated the label using the Google Translate Chrome plugin.
Sensus explains on a separate “about us” page that the site took one person eight hours to create utilizing two AI systems that generated everything from text to pictures and authors. Here, Sensus acknowledges the risk of misinformation from these articles and has been clear that the material is fictitious and AI-generated. The webpage was primarily developed for a panel discussion at Almedalen, an annual democratic forum in Sweden that gathers politicians, organizations, and businesses, and acts as a showcase for educational material.
While the red label offers an important disclaimer, it is not visible when collapsed, showing only a red info icon, which leaves the material open to misuse. An article can easily be screenshotted with the label cropped out and then reshared as misinformation – as is the case with many of the falsehoods we encounter online – a risk Sensus does not appear to have considered.
Sensus says it wanted to create a more tangible way of showing the possibilities and dangers of AI, and to demonstrate that anyone with a computer can build their own troll factory.
Shortly after the launch of Nyhetsveckan, Brit Stakston, a Swedish-Norwegian author and media strategist, voiced criticism of the website on a podcast by the Swedish news outlet Blankspot. Stakston elaborated on her criticisms to Logically Facts, pointing out that it is “both unethical and irresponsible to produce and spread more lies to ‘educate’ about how disinformation looks and works.” She is pointed in her observation that this is a study association, and such organizations “don’t need to invent this kind of thing.” Rather, they should use the tools already at their disposal to increase people’s knowledge of AI and its potential impact on democracy.
Digital strategist Siavash Vatanijalal, who created the website, emphasized to Logically Facts that its purpose was to show the capacity of AI and how easy it is to create a fake disinformation website, believing that a practical, hands-on example – such as the website – is better than a theoretical one. Vatanijalal says that the purpose of Sensus is to support and provide opportunities for non-formal learning and activities, which strengthens democratic development.
Vatanijalal does recognize the criticism of producing harmful content and admits the articles could have been framed differently. Regarding the red labels, he points out that they could also have been removed using Google Chrome’s inspector tool, which allows anyone to modify a page’s source code as displayed in their browser. But knowing this, and the ease with which people can disseminate false screenshots, perhaps Sensus ought to have reconsidered launching disinformation on such a platform.
False information, particularly that which is harmful, is pervasive online, and it continues to grow with the help of AI. While Sensus’ original aim was to educate people on the dangers of AI and disinformation, it appears to have recognized the harm in producing its own disinformation: the website has been locked since July.
A study published in Science Advances suggests that AI-generated disinformation may be more convincing than disinformation written by humans. One of the study’s co-authors, Dr. Giovanni Spitale, a researcher at the University of Zurich, says, “The fact that AI-generated disinformation is not only cheaper and faster, but also more effective, gives me nightmares.”
This is supported by research from Logically Facts, which demonstrates a clear need for more fact-checking operations, particularly around elections and political events. The study also revealed that more needs to be done to educate people on how to debunk false information, with 66 percent of those surveyed wanting to learn more about spotting false claims. Logically Facts has written extensively on how to detect AI image misinformation.
Consequently, there is a dire need for more education on AI disinformation, along with knowledge sharing to help stem the tide of false information. It is commendable that Sensus wants to educate people about this topic, but its method was questionable. Basic safeguards, such as keeping the site closed to the public but open to researchers, would have helped prevent any spread of misinformation while retaining the site as an educational tool. Rather than releasing harmful claims into the wild, these could have been accompanied by fact-checks from reputable organizations.
There are many ways to fight mis- and disinformation, including popular education and similar initiatives, such as media literacy courses and developing critical thinking skills. Ultimately, greater collaboration between fact-checkers and public educators is essential; every part of society has a potential role to play in the fight against harmful content.