AI Biases and the Global South

By: Anmol Irfan
April 28, 2022


In late March, Professor Nida Kirmani, a feminist sociologist at Lahore University of Management Sciences in Pakistan, logged on to Twitter as usual. She quickly realized something was awry: other accounts bearing her name were gaining traction on the platform. As a vocal feminist online, Professor Kirmani was no stranger to vitriol, but her trolls had now taken things a step further, creating parody accounts under her name. These accounts twisted her words and opinions and attacked her personal life, while their creators claimed the tweets were simply jokes.

Despite her large following and academic credentials, Professor Kirmani is still not verified on Twitter, and that lack of verification made dealing with the impersonators much harder. She has spoken out multiple times about how users in the Western world have a much easier time getting verified on social media platforms.

“Being denied verification means that some people genuinely mistook the parody accounts for my genuine account. This is why I requested verification,” shares Professor Kirmani. “The parody accounts were very upsetting, but I also realized that the more attention I gave them, the more fuel they received. I do think it should have been treated more seriously by Twitter as an organized campaign.”

Kirmani’s parody accounts still exist, despite many users reporting them to Twitter. The trolling against Professor Kirmani was so extensive that when I tweeted in support of her — without even tagging her account — I was immediately met with vitriol in the replies. Unfortunately, when social media platforms lack regulation and treat hate speech or targeted attacks the same way as free speech, this kind of vitriol is common.

Who decides what hate speech is?

Dr. Niki Cheong, a former journalist, is a lecturer in Digital Culture and Society at King’s College London, where he researches disinformation practices, particularly in Southeast Asia. He points out that the power to decide which hate speech is worthy of removal lies with the tech companies themselves. “I think tech companies are trying to show us that they are doing something,” he explains, “but so many things are still not seen as hate speech. My concern is that these companies are getting to set the standard for hate speech, and that seems arbitrary.”

Indeed, Professor Kirmani’s experiences are far from unique when it comes to the online lives of people of color around the world. Despite the prevalence of information disorder in countries in the Global South — particularly those with political or social instability — research focused on the Western world dominates, meaning that tech companies are still far behind in understanding how to better protect users from these regions.

Unfortunately, tech companies can sometimes deliberately obfuscate, as disinformation expert and founder of #ShePersisted Lucina Di Meco points out. 

"We don't know enough about how social media platforms and their algorithms work. We only know what they tell us and what we learn from whistleblowers, but this can't be an accountability mechanism,” she says, adding, “When they tell us they've taken down a certain number of cases of hate speech, we don't know how many they've let slip or ignored."

But what makes this ignorance far more dangerous is how far big tech companies are willing to go to avoid answering for their actions. In 2020, Google fired the ethical AI researcher Dr. Timnit Gebru after she wrote a paper questioning the ethics of the company's language systems. Her research pointed out that these systems generated biased and hateful language because of the data they had been trained on.

Dr. Gebru’s research highlighted how the ethics and biases of AI are defined by the data sets the technology is fed, which are often influenced by a westernized lens — something that can lead to AI biases relating to race and gender.

But the issue goes deeper. “Facebook doesn’t fact-check politicians because they say politicians are already criticized enough, but that's a very myopic San Francisco view. That doesn't happen in the rest of the world,” says Pratik Sinha, a software engineer and co-founder of the fact-checking platform Alt News, referring to the many authoritarian countries where state relationships and political alliances with the media mean political flaws are swept under the rug.

There is also the issue of simply not having enough boots on the ground. "Look at the ratio of employees to citizens in the U.S. vs the ratio of employees to citizens in South Asia,” Sinha explains. “They need to be more transparent and more accountable, and need to work with different stakeholders like local fact checking bodies.”

Non-English languages

Online fact-checking and hate speech filters are further biased by language barriers. Although the majority of the global population does not speak English, tech companies have few datasets for non-English languages.

Gratiana Fu, a data scientist, was inspired by Dr. Gebru’s experience to look more closely at how big tech companies control our everyday interactions with information. 

“There is still a huge gap in technologies that can process and analyze non-English text, an issue for platforms like Facebook where nearly two-thirds of users use a language other than English. Communities outside of the western sphere speak hundreds of different languages and that’s a massive barrier to using algorithms for those people,” Fu explains.

With fewer checks and balances on information shared in non-English languages, Fu points out, misinformation is bound to be marked as true, and accurate information as false.

Dr. Cheong says he was inspired to start his work on disinformation 11 years ago, after noticing patterns on Twitter in which multiple accounts would repeat the same hashtags and phrases. More than a decade later, the same tools are being used to silence vulnerable voices, yet social media companies have only recently started talking about it.

"In Malaysia, we've been dealing with this issue of cyber troopers for almost two decades," he explains, adding that this is state-linked manipulation of information, as these are people to whom state parties may send talking points to be championed on online platforms.

Because power dynamics are skewed in favor of state parties and elites, this lack of checks and balances means that most of the pressure is felt by marginalized groups, especially women in the Global South. “This is the very first generation in which women in so many countries are considering leadership roles, either in politics or social engagement, speaking out for their rights,” Di Meco says, adding, “A lot of mud is being thrown at women leaders which leaves space for autocrats and oligarchs and corrupted leaders who have an opportunity to entrench their power even with fewer checks and balances. The younger generation and women are often outsiders of politics who are keeping them in check, so if this is getting lost, they'll stay in power for longer."

While discussions of bias in AI often lead to the simplistic argument that the technology or the databases behind it should simply be fixed, there is a murkier reality: who is shaping these “dysfunctional” algorithms, as Sinha describes them? Pointing to the relationships big tech companies have with local governments and businesses in certain countries, Sinha notes that, at the end of the day, tech companies exist to make a profit. "In countries like India, Pakistan, and Bangladesh, with semi-authoritarian governments, you need to be friends with the government to do business. There's a lot of hate speech in India — and these people are not being de-platformed as quickly as they should.”

As Fu points out, "Hate speech is dangerous. Many studies have been conducted that examine the effects of hate speech and have found associations with depression, anxiety, and PTSD. People who pass hate speech off as ‘jokes’ invalidate the real-world impacts of those attacks." 

She adds, “As for how companies can better protect marginalized users, there is so much to do — better content moderation (i.e. hiring content moderators that are fluent in local languages and paying them fair wages, less reliance on AI identifying and removing problematic content) and greater transparency into the algorithms are two that immediately come to mind.”
