Social media engagement: Misinformation, anger, and algorithms

By: Alexander Smith
November 17, 2023

Social media is driven by engagement – getting as many clicks, likes, and shares as possible. The more you engage with a social media platform, the more money it can make through advertising to you. Social media companies figure out what kind of content you will engage with the most and profit from that.

Social media companies have a clear financial incentive to value engagement. According to Nieman Lab, “Maintaining high levels of user engagement is crucial for the financial model of social media platforms. Attention-getting content keeps users active on the platforms. This activity provides social media companies with valuable user data for their primary revenue source: targeted advertising.”

Social media platforms do have measures in place to combat misinformation. X (formerly Twitter) uses its community notes initiative to add context to misleading or misrepresented posts. TikTok and Facebook detail how they monitor misinformation and what kind of content they remove from their platforms. YouTube enforces its content policies using machine learning and content moderators.

As the Israel-Hamas conflict continues and misinformation spreads, social media platforms are attracting attention regarding the roles their algorithms play in what kinds of content people see. In a recent blog post, TikTok clarified that its “recommendation algorithm doesn't ‘take sides’ and has rigorous measures in place to prevent manipulation.”

Facebook and TikTok use third-party fact-checking services, including Logically Facts, to flag and add context to posts classified as misinformation.

Content creators also make money from people engaging with their content. But what makes something go viral and create engagement? It could be a quality work of art, a hit song, or a funny video that people share because it’s entertaining.

But studies show that being entertained isn’t the main reason for online engagement – it’s anger.

When media organizations realized this, clickbait headlines and rage-farming became commonplace, all in the pursuit of higher engagement figures. For some, this anger is a way of fighting misinformation, with one TikTok creator telling Insider, "Reporting on science, misinformation, and crimes isn't rage-baiting… Rage-baiting is deliberate manipulation." 

However, research points to anger as a driver of belief in “politically concordant misinformation.” Anger has also been recognized as fuelling COVID-19 misinformation, where “people who felt angry were more vulnerable to misinformation and actively engaged in disseminating false claims about COVID-19.”

There are many reasons why it’s so easy for us to fall for online misinformation, and misinformation spreaders are very good at tapping into our internal biases and emotions. Misinformation is often designed to elicit an emotional response – mainly anger. When we’re angry, we want to share the source of that anger with our friends and tell them they should be angry too. As one Harvard research paper puts it, “Anger mobilizes and, in turn, angry individuals easily rationalize their act of sharing misinformation by deeming it trustworthy.”

When harmful content meets money 

Since taking over Twitter and renaming it X, Elon Musk has introduced an ad revenue-sharing model that allows paid subscribers to receive real money based on the number of impressions their posts receive. This could incentivize creators to make quality, engaging content and be rewarded for their creativity. But on a platform where misinformation already thrives, it’s quicker and easier to be controversial.

Soon after the introduction of the ad revenue-sharing model, one far-right conspiracist, known for spreading QAnon and PizzaGate conspiracies, was found to have received revenue payouts even after having his account banned for sharing child sexual abuse material – a ban that was reversed after intervention from Musk himself. MediaMatters has also reported that advertisements for major brands were shown on explicitly pro-Hitler accounts that were eligible to receive a share of ad revenue through the program. The accounts were later suspended.

MediaMatters’ Alex Kaplan identifies a QAnon-linked account eligible for X revenue shares (Source: X)

As well as allowing users to make money from ad revenue, social media platforms let them earn money directly from viewers. In light of the recent Qur’an burnings in Sweden, TikTok has been accused of profiting from broadcasting footage of the incidents. According to the Swedish newspaper Dagens Nyheter, “The broadcasts have received tens of millions of views and, according to Salwan Momika, he can earn up to SEK 3,000 (269 USD) per broadcast by accepting gifts from viewers in exchange for real money. TikTok itself takes half the value of the gifts and thus profits from the activity of the Qur’an burners.”

Does mis/disinformation get more engagement than regular content?

Studies show that misinformation receives more engagement than other kinds of content. Wired reports that, on Facebook in particular, right-wing fake news receives higher engagement levels than other news sources. The Washington Post has reported that during the 2020 election, “news publishers known for putting out misinformation got six times the amount of likes, shares, and interactions on the platform as did trustworthy news sources, such as CNN or the World Health Organization.”

An MIT study found that “false news stories are 70 percent more likely to be retweeted than true stories are. It also takes true stories about six times as long to reach 1,500 people as it does for false stories to reach the same number of people.”

It’s clear that misinformation spreads more quickly and easily than other kinds of content, but it’s also important to understand why people share it in the first place.

The psychology of misinformation and why we share it

Evidence shows that, in many cases, misinformation is shared because platforms make it so easy. A study by the Integrity Institute examined how different social media platforms amplified misinformation. The study determined a Misinformation Amplification Factor (MAF) score for each platform based on how easy it is to amplify misinformation there. 

The study results show that “Twitter and TikTok have the highest MAF… On Twitter, the retweet feature has much less friction than other platforms’ sharing options. A tap and a confirmation is all that is required to retweet.”

Results of a study showing how easy it is to amplify misinformation on different social media platforms (Source: integrityinstitute.org)

In contrast, sharing a post on Facebook “requires that a user pick which method they want to use to share it, either a direct message or new post, which leads to a new post prompt where there is an expectation to add commentary. The low friction sharing option on Twitter allows misinformation to spread far beyond the followers of the account that uploaded it.”
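
The Integrity Institute hasn’t published its exact formula, but the intuition behind an amplification factor can be sketched in a few lines of code: compare the engagement that misinformation posts attract with the engagement a typical post receives. The ratio and field names below are illustrative assumptions, not the Institute’s actual methodology.

```python
# Toy sketch of a misinformation-amplification metric, loosely inspired by the
# Integrity Institute's MAF. The ratio and field names are illustrative
# assumptions, not the Institute's published methodology.
from statistics import median

def amplification_factor(posts: list[dict]) -> float:
    """posts: dicts with 'engagement' (int) and 'is_misinfo' (bool).

    Returns how many times more engagement flagged posts receive than the
    median post on the platform; 1.0 would mean no amplification.
    """
    baseline = median(p["engagement"] for p in posts)
    flagged = [p["engagement"] for p in posts if p["is_misinfo"]]
    if not flagged or baseline == 0:
        return 0.0
    return median(flagged) / baseline

sample = [
    {"engagement": 120, "is_misinfo": False},
    {"engagement": 95,  "is_misinfo": False},
    {"engagement": 480, "is_misinfo": True},  # one viral false post
    {"engagement": 110, "is_misinfo": False},
]
print(f"amplification ≈ {amplification_factor(sample):.1f}x")  # ≈ 4.2x
```

On this reading, a platform where low-friction resharing lets false posts rack up engagement would score a higher factor than one whose sharing flow adds friction.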

Aside from how easy it is to share information without verifying whether or not it’s true, there’s a social aspect to the spread of misinformation. A study by UC Berkeley’s Kidd Lab found that “study participants were more likely to agree or disagree with a statement after seeing evidence that the belief was more popular than they had expected. Some who were on the fence about a controversial issue changed their minds based solely on the number of endorsements the statement received.”

How algorithms come into play

The content we see in our social media feeds is not limited to the accounts we follow. Social media companies use algorithms that draw on what you have liked, watched, or shared to show you similar and related content.

When you interact with an app, it might ask you to choose a few topics you're interested in, and the more you interact with it, the more it learns about what you like. It then offers you content that it thinks is relevant to you. Algorithms use different signals to measure engagement, including how long you watch a post, what you "like," and the accounts and hashtags you follow.
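
Platforms don’t disclose their ranking formulas, but the basic mechanism can be illustrated with a minimal sketch: each candidate post gets a score from weighted engagement signals, and the feed is sorted by that score. The signals and weights below are invented for illustration only.

```python
# Minimal illustration of signal-based feed ranking. The signals and weights
# are invented for illustration; real platform formulas are not public.
from dataclasses import dataclass

@dataclass
class Post:
    author_followed: bool  # do you follow the account?
    watch_seconds: float   # how long you watch posts like this one
    liked_similar: int     # similar posts you have liked
    shared_similar: int    # similar posts you have shared

def engagement_score(p: Post) -> float:
    # Weighted sum of engagement signals; higher scores surface earlier.
    return (
        2.0 * p.author_followed
        + 0.1 * p.watch_seconds
        + 1.0 * p.liked_similar
        + 3.0 * p.shared_similar  # shares weighted heavily: they spread content
    )

feed = [
    Post(author_followed=True,  watch_seconds=5,  liked_similar=1, shared_similar=0),
    Post(author_followed=False, watch_seconds=40, liked_similar=6, shared_similar=2),
]
ranked = sorted(feed, key=engagement_score, reverse=True)
# The unfollowed but highly engaging post outranks the followed account's post.
```

Note how the second post, from an account you don’t follow, outranks the first purely on engagement signals – which is why your feed contains more than what you chose to follow.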

Social media algorithms push controversial content because it gets more engagement. According to Nieman Lab, “Sharing widely read content, by itself, isn’t a problem. But it becomes a problem when attention-getting, controversial content is prioritized by design.”

What can social media platforms do about it?

Legislation designed to make social media companies more proactive about fighting misinformation – such as the EU Digital Services Act, which imposes stricter obligations on platforms with more than 45 million EU users, and the not-yet-enacted U.K. Online Safety Bill – aims to bring about drastic changes. However, since the platforms themselves are designed in a way that encourages the spread of misinformation, one solution could be to modify the way people interact with them.

A study from UCL found that “The addition of ‘trust’ and ‘distrust’ buttons on social media, alongside standard ‘like’ buttons, could help to reduce the spread of misinformation.” This would allow users to rate the trustworthiness of a post and, ideally, reduce the spread of untrustworthy information. 

Laura K. Globig of the Affective Brain Lab and co-author of the study told Logically Facts, “The spread of misinformation on social media platforms is facilitated by the existing incentive structure of those platforms. Users aren’t motivated to share true posts and to avoid sharing false posts. Instead, they are incentivized to maximize the engagement they receive.”

“While existing research shows that users can actually discern true from false content relatively well – they have little incentive to do so – especially because misinformation often generates more engagement (i.e., rewards) than reliable posts. And as a result, people will often share misinformation even when they do not trust it.”

Globig’s study set up “simulated social media sites” that offered trust/distrust buttons alongside the usual like/dislike options. Participants used the trust/distrust buttons more than the like/dislike ones, and with that feedback in place they were more likely to reshare true posts.
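
The incentive shift the study describes can be captured in a toy model: if sharers are rewarded only for engagement, false posts (which tend to earn more of it) are the better bet; add a reward for being rated trustworthy, and true posts win. All payoff numbers below are invented for illustration – this is not the study’s model.

```python
# Toy model of the incentive shift from adding trust/distrust feedback.
# All payoffs are invented for illustration; this is not the study's model.

def sharing_payoff(shares_true: bool, trust_reward: float) -> float:
    engagement = 60 if shares_true else 100  # misinfo earns more raw engagement
    trust = trust_reward if shares_true else -trust_reward  # true earns trust, false loses it
    return engagement + trust

for trust_reward in (0, 50):
    true_pay = sharing_payoff(True, trust_reward)
    false_pay = sharing_payoff(False, trust_reward)
    better = "true" if true_pay > false_pay else "false"
    print(f"trust reward {trust_reward:>2}: sharing {better} posts pays more "
          f"({true_pay:.0f} vs {false_pay:.0f})")
# With no trust reward, false posts pay more (60 vs 100);
# a large enough trust reward flips the incentive (110 vs 50).
```

The point of the toy model is Globig’s finding in miniature: changing what the platform rewards changes what users find worth sharing, without necessarily reducing how much they share.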

As well as being bad for individuals who consume and share information on social media, the sharing of misinformation could be bad for social media companies. Globig elaborates: “On top of the damaging consequences that the spread of misinformation has on society as a whole, it could also directly harm the platforms themselves, as some users may become too wary of the reliability of a given platform and might thus either leave or switch to a different platform. This would reduce user engagement. On top of that, they may also become subject to restrictions by policymakers if their content is too harmful. Both would not be in the platforms’ interest.”

Globig is keen to point out that social media platforms may not want to implement such interventions to stem misinformation because they care about the “quantity of engagement” as well as the quality. However, “the nice thing about our proposed intervention is that we don’t find a reduction in user engagement, meaning this measure would not come at a cost to the social media platforms. As such it could be easily incorporated into existing social media sites,” she says.

How do I stop sharing misinformation?

  • Don't amplify posts from suspicious sources, even if they make you angry.
  • If you want to share a post, take a screenshot, remove the account’s name, and share the screenshot instead. This way, you won’t amplify the account that posted it.
  • Be conscious of what kind of content you’re consuming online. If you find yourself “doom-scrolling,” take a break.
  • Make use of tools within the app. On TikTok, the “not interested” feature is quite powerful and allows you to influence your algorithm to an extent.

Read the Logically Facts guides for media literacy: 

Why we believe online disinformation

Reading beyond the headlines

Steps to stop spreading misinformation
