By: Annet Preethi Furtado
December 8, 2023
(Illustration by Matthew Hunter)
Blurring the lines between fact and fiction, mis/disinformation dominated the discourse on several major issues on social media this year, shaping narratives and influencing opinions worldwide. With platforms like X (formerly Twitter) undergoing major changes under Elon Musk’s ownership, monitoring the credibility of information became harder.
From the Israel-Hamas conflict to an ethnic conflict in India, 2023 saw verified and legitimate content about almost every major event challenged by a deluge of mis/disinformation that wreaked havoc on the information ecosystem.
Here’s a glimpse of major narratives and trends that we saw this year.
A prominent and disturbing trend that marked the mis/disinformation space this year was the popular online narrative claiming that Palestinians were using ‘crisis actors’ and faking injuries, deaths, and funerals amid the recent Israel-Hamas conflict. Mentioning ‘Pallywood’—the pejorative portmanteau of ‘Palestinian’ and ‘Hollywood’—several claims used unrelated behind-the-scenes footage, miscaptioned videos, and clips from films to mock and downplay civilian harm in Gaza.
Claims mocking civilian deaths in Gaza were shared by several social media accounts.
(Source: X/TikTok/Instagram/Modified by Logically Facts)
As the use of digitally generated images increased this year, artificial intelligence (AI) also found its way into the ongoing conflict as a means of muddying the waters and influencing opinions. AI-generated images circulating around the Israel-Hamas conflict were primarily used to garner support for one side, while emotionally powerful images, such as a child under rubble, reinforced existing narratives and emphasized the dire situation in Gaza.
A Logically Facts analysis also revealed that a substantial proportion of global online discourse on the conflict originated from accounts in India, using hashtags like #IsraelUnderAttack and #IStandWithIsrael to propagate Islamophobic rhetoric and gain political advantages.
The rising tide of misinformation also took on communal hues in countries like India, eventually leading to significant real-world harm.
Early on in the ethnic conflict that shook the northeastern state of Manipur, narratives on social media falsely alleged that the conflict was motivated purely by religious factors and aimed at perpetrating violence against Hindu Meiteis. Logically Facts spoke to experts who pointed out that several factors lay behind the violence and that framing it solely as a religious issue was inaccurate.
Screenshots of false claims shared in the aftermath of violence in Manipur. (Source: X/Modified by Logically Facts)
The misinformation surge only grew with time as Manipur remained embroiled in ethnic clashes between the Meitei and Kuki communities. Aided by the internet shutdown in the state, misinformation swirled on social media and made its way into the physical realm as well. Fabricated stories circulated in the state, resulting in horrifying acts of violence against women, including an incident in which a mob publicly paraded and sexually assaulted two women from the Kuki-Zomi community. This violence stemmed from misinformation surrounding the “Churachandpur case,” wherein an unrelated image of a young woman’s body from Uttar Pradesh, wrapped in plastic, was shared to falsely allege that a Meitei woman had been sexually assaulted and murdered by members of the Kuki tribe.
The relentless spread of mis/disinformation in the country even saw Muslims being blamed for a tragic train accident in Odisha’s Balasore. On June 2, after three trains collided in Balasore, leaving at least 288 people dead and over 1,000 injured, Islamophobic narratives suggesting possible sabotage by the Muslim community spread on social media. A Logically Facts report noted a significant surge in the usage of the hashtag “#trainjihad” within hours of the incident. While interaction with these narratives was limited, a few social media users engaged in speculative discussions about the incident, claiming without evidence that the collision was not a mere coincidence but part of a broader "conspiracy" orchestrated by proponents of "train jihad."
In a similar fashion, some social media users falsely alleged that a Muslim station master named Sharif, working near Balasore, was on the run. Meanwhile, some others implied a “conspiracy” by the Muslim community by misidentifying the Bahanaga ISKCON Temple near the railway tracks as a mosque. This prompted the Odisha police to warn of potential legal consequences for those fostering communal disharmony through misleading posts.
False communal claims were spread widely after the Balasore train accident. (Source: X/Modified by Logically Facts)
In the aftermath of the devastating earthquakes that struck Turkey and Syria in February, one narrative that grabbed the imagination of social media users was that HAARP, short for High-frequency Active Auroral Research Program, ‘orchestrated’ the seismic activity that led to the tragedy. This was one of many instances in which HAARP, a research facility in Alaska that studies the ionosphere using radio transmitters, was falsely held responsible for a natural disaster. The firmly established scientific consensus that HAARP cannot cause anything beyond, at most, minor effects in the ionosphere did little to deter conspiracy theorists from linking it to several natural catastrophes in 2023. Often riding on the back of misleading visuals, claims linked HAARP to floods and wildfires in Greece, flash floods in Libya, and the September earthquake in Morocco.
Another tool invoked to falsely claim that several natural calamities were actually human-made was Directed Energy Weapons (DEWs), with many insinuating that the technology — which is still at a nascent stage and can be loosely understood as a highly concentrated beam of energy, such as a powerful laser — was used to trigger fires and even earthquakes in 2023.
This narrative was widespread in the aftermath of the Maui wildfires that ravaged the island U.S. state of Hawaii (spelled Hawai'i by locals) in August. Several social media accounts alleged that Maui had been intentionally targeted by DEWs, using unrelated and altered visuals of electrical explosions, rocket launches, and meteor flashes, among others. Some users also falsely alleged that billionaires like Bill Gates ‘orchestrated’ the fires to grab land from Maui’s indigenous populations, claiming that the fires did not hit the elite's mansions and estates. Another popular claim falsely asserted that all blue-colored structures, including Oprah Winfrey’s house, survived the fire, implying that the calamity was sparked by DEWs. Such claims around DEWs were termed “nonsense” by experts who told Logically Facts that DEW technology cannot trigger disasters or attacks without detection.
Screenshots of misleading and false claims linking HAARP and DEW to natural calamities.
(Source: X/TikTok/Modified by Logically Facts)
The Maui wildfires also breathed new life into several other conspiracy theories. Several social media accounts claimed without evidence that the U.S. government intentionally started the fire to destroy small businesses and further plans to implement smart cities, a narrative linked to conspiracy theories surrounding 15-minute smart city plans and the World Economic Forum (WEF).
False claims regarding 15-minute smart cities shared online. (Source: TikTok/Modified by Logically Facts)
Further, Logically Facts found that X, through its pay-to-get-verified model X Premium, became a hub for right-wing accounts spreading false information and conspiracy theories, including “Pizzagate” and “Adrenochrome,” in the aftermath of the wildfires. Adherents of both the Adrenochrome conspiracy theory, which has anti-Semitic roots, and “Pizzagate” hold that children are being trafficked by "global elites."
Climate change denial also played a big role in the spread of such conspiracy theories, with online posts wrongly claiming the Maui wildfires were a false flag operation or staged. This narrative also abounded during the 2023 summer wildfires in Greece, fueling anti-migrant sentiment. Conspiracy theorists falsely blamed migrants and refugees for “committing arson,” in line with their long-standing habit of attributing wildfires to arson, downplaying the role of climate change, and conflating acts of carelessness, such as discarded cigarettes and barbecues, with “some dark plan to burn the country.”
A common thread between all these events was the usage of old and unrelated images, which has become a recurring trend in the aftermath of global crises. After the Turkey-Syria quakes, old videos and unrelated images were shared to claim they showed tsunamis and buildings being destroyed in the disaster’s aftermath; similarly, the earthquake in Morocco was followed by a deluge of misattributed videos circulating on social media. Misinformation around the Israel-Hamas conflict and the violence in Manipur also relied heavily on the use of old and unrelated visuals, a phenomenon noticed during other events such as the Russian invasion of Ukraine and the 2023 French riots.
Social media platforms struggle to effectively address and curb the rising menace caused by unverified content and manipulated visuals, often amplified by verified accounts. The sheer volume of content uploaded to these platforms daily further compounds the challenge.
Cognitive biases such as confirmation bias, attribution bias and error, black-and-white thinking, overgeneralization, and self-fulfilling prophecies support the acceptance of misinformation, psychologists told Logically Facts. In this environment, increasing media literacy and the smart use of technology can help fight mis/disinformation. Here are some tips on how to discern AI-generated images, how to protect yourself from climate misinformation, and some skills you can sharpen: click restraint, thinking critically, basic geolocation, and reading beyond headlines.
(Edited by Priyanka Ishwari)