Algorithms Designed To Detect Hate Speech Are Biased Against Black People: Study
In an age when more violence and mass shootings follow perpetrators posting disturbing content online, social media giants like Facebook, Twitter and YouTube are turning to artificial intelligence (AI) to help identify potential threats. Unfortunately, AI designed to detect hate speech may actually amplify racial bias, according to Recode.
One study found that leading AI models built to detect hateful speech are 1.5 times more likely to flag tweets by Black people as offensive. That figure rises to 2.2 times for tweets written in Ebonics, or what is deemed “Black speech.”
Another study found similar bias against Ebonics after examining data sets of approximately 155,800 Twitter posts. Since what counts as offensive is subjective, the algorithms may fail to understand the context of certain words and flag posts incorrectly, Recode continued.
Add to that the fact that AI has built-in bias against Black people and other people of color to begin with, and the problem is amplified. New York Rep. Alexandria Ocasio-Cortez highlighted this earlier this year in a conversation with writer Ta-Nehisi Coates.
“Algorithms are still made by human beings, and those algorithms are still pegged to basic human assumptions,” she told Coates at the annual MLK Now event, Vox reported. “They’re just automated assumptions. And if you don’t fix the bias, then you are just automating the bias.”
The latest findings lend credence to many Black activists’ claims that Facebook censors them more harshly than it does white users. Because AI has to learn its job from human-labeled data, if the people labeling that content are biased, the model will be too.
“What we’re drawing attention to is the quality of the data coming into these models,” Cornell University researcher Thomas Davidson told Recode. “You can have the most sophisticated neural network model, but the data is biased because humans are deciding what’s hate speech and what’s not.”
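Davidson’s point about data quality can be illustrated with a toy sketch. All tweets, labels and the dialect marker below are invented for illustration; the real studies used far larger corpora and actual classifiers, but the mechanism is the same: if annotators flag dialect terms as offensive more often, any model trained on those labels inherits that skew.

```python
# Toy corpus of (tweet, label) pairs, where label 1 = "offensive".
# The tweets are equally benign, but the simulated annotators flagged
# tweets containing the dialect marker "finna" more often.
labeled_tweets = [
    ("i am finna go home", 1),    # biased label: flagged
    ("finna watch the game", 1),  # biased label: flagged
    ("finna get some food", 0),
    ("i am going home now", 0),
    ("watching the game now", 0),
    ("getting some food now", 0),
]

def token_flag_rate(tweets, token):
    """Fraction of tweets containing `token` that were labeled offensive."""
    labels = [label for text, label in tweets if token in text.split()]
    return sum(labels) / len(labels)

# A model fit to these labels would learn the annotators' skew:
print(token_flag_rate(labeled_tweets, "finna"))  # 2 of 3 flagged
print(token_flag_rate(labeled_tweets, "now"))    # 0 of 3 flagged
```

The disparity comes entirely from the labels, not the text: no matter how sophisticated the downstream model, it can only reproduce the pattern the human annotators baked into the training data.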