
Why An Algorithmic Solution To White Supremacy Won’t Work On Twitter

Twitter has repeatedly promised to crack down on hate speech on its platform, vowing to delete accounts that promote white supremacy and other hateful content.

But a Twitter employee who works on machine learning has said that a proactive, algorithmic effort to cut white supremacist content from Twitter would also snare Republican politicians.

At a Twitter all-hands meeting on March 22, an employee asked a blunt question, Vice reported. “Twitter has largely eradicated Islamic State propaganda off its platform. Why can’t it do the same for white supremacist content?”

An executive answered the question by saying Twitter follows the law. Then a technical employee who works on machine learning and artificial intelligence continued the answer.


The employee explained that all content filters involve a tradeoff. If an online platform aggressively restricts ISIS content, for example, it can also red-flag accounts that are innocent of dangerous speech, such as Arabic-language broadcasters.
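The tradeoff the employee described is essentially a classification-threshold problem: the more aggressively a filter removes content above a given model score, the more innocent accounts it sweeps up. Below is a minimal, hypothetical Python sketch of that dynamic; the scores, labels, and thresholds are invented for illustration and do not reflect Twitter's actual models, data, or enforcement decisions.

```python
# Hypothetical sketch of a content-filter threshold tradeoff.
# All scores and labels are invented for illustration; they do not
# reflect Twitter's actual models, data, or enforcement decisions.

# Each tuple: (model score for "extremist content", true label).
# True = genuinely violating content; False = innocent account caught
# discussing the same topic (e.g., a broadcaster reporting on extremism).
scored_posts = [
    (0.95, True), (0.90, True), (0.85, False),
    (0.80, True), (0.70, False), (0.65, True),
    (0.50, False), (0.40, False), (0.30, False),
]

def filter_outcomes(threshold):
    """Tally what an automated filter at this threshold would remove."""
    removed = [label for score, label in scored_posts if score >= threshold]
    caught = sum(removed)                 # violating posts correctly removed
    collateral = len(removed) - caught    # innocent posts swept up with them
    missed = sum(label for _, label in scored_posts) - caught
    return caught, collateral, missed

# Lowering the threshold makes the filter more aggressive: it catches
# more violating content, but the collateral damage rises with it.
for threshold in (0.9, 0.6, 0.3):
    caught, collateral, missed = filter_outcomes(threshold)
    print(f"threshold {threshold:.1f}: caught {caught}, "
          f"collateral {collateral}, missed {missed}")
```

On this toy data, the strict filter (0.9) misses half the violating posts, while the aggressive one (0.3) catches everything but removes five innocent accounts alongside it, which is the "collateral" problem the employee described.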

That employee admitted that Twitter “hasn’t taken the same aggressive approach to white supremacist content because the collateral accounts that are impacted can, in some instances, be Republican politicians,” Vice reported.

In other words, content from Republican politicians could be swept up by algorithms that aggressively remove white supremacist material.

“On one hand, the company has a hateful conduct policy that explicitly bans tweets meant to incite fear of, or violence toward, protected groups. On the other, any number of prominent white nationalists can still be found on the platform, using Twitter’s viral sharing mechanics to grow their audiences and sow division,” The Verge reported.

This is all from an employee and not an official Twitter explanation. In fact, Twitter told Motherboard that this "is not (an) accurate characterization of our policies or enforcement—on any level." Still, the employee's comment raises the question: has Twitter held back from cracking down as aggressively as it could on white supremacy in order to protect Republican Twitter users?

“I haven’t seen a legit ISIS supporter on Twitter who lasts longer than 15 seconds for two-and-a-half years,” Amarnath Amarasingam, an extremism researcher at the Institute for Strategic Dialogue, told Motherboard. “Most people can agree a beheading video or some kind of ISIS content should be proactively removed, but when we try to talk about the alt-right or white nationalism, we get into dangerous territory, where we’re talking about (Iowa Rep.) Steve King or maybe even some of Trump’s tweets, so it becomes hard for social media companies to say ‘this content should be removed.’”