Facebook advertising is increasingly the focus of international attention. Last week, the globally dominant ad-sales platform disclosed that it had found $100,000 worth of ads placed during the 2016 presidential election season by “inauthentic” accounts that appeared to be affiliated with Russia.
It is likely that special counsel Robert Mueller’s legal team is exploring whether the tech company has criminal liability for political ads sold to a Russian firm, Slate reported.
Tech companies are under growing scrutiny for helping facilitate white supremacy and online hatred.
“Facebook’s ad network, in particular, still seems to embody an ‘anything goes’ approach to targeting, despite fixing a few high-profile problems.”
This week, we learned that Facebook’s self-service ad-buying platform enabled advertisers to target the news feeds of almost 2,300 people who expressed interest in the topics of “Jew hater,” “How to burn jews” or “History of ‘why jews ruin the world.’”
As ProPublica’s reporters typed “Jew hater,” Facebook’s ad-targeting tool went so far as to recommend related topics such as “How to burn Jews” and “History of ‘why Jews ruin the world,’” Slate reported.
Facebook’s advertising categories were created by an algorithm rather than by people, according to ProPublica, a New York City-based nonprofit newsroom that produces investigative journalism in the public interest.
After one person died Aug. 12 in violent Charlottesville protests led by Nazis and right-wing groups, Facebook and other tech companies promised to be more vigilant in monitoring hate speech.
“There is no place for hate in our community,” Facebook CEO Mark Zuckerberg wrote at the time. He promised to keep a closer eye on hateful posts and threats of violence on Facebook. “It’s a disgrace that we still need to say that neo-Nazis and white supremacists are wrong — as if this is somehow not obvious.”
But Facebook apparently hasn’t gotten around to it yet, or at least missed a few things in the ad-buying process.
In all likelihood, the ad categories that we spotted were automatically generated because people had listed those anti-Semitic themes on their Facebook profiles as an interest, an employer or a “field of study.” Facebook’s algorithm automatically transforms people’s declared interests into advertising categories, ProPublica reported.
This week, ProPublica tested the anti-Semitic ad categories, paying $30 to target the three groups with three promoted posts that placed a ProPublica article in the targeted users’ news feeds. Facebook approved all three ads within 15 minutes:
After we contacted Facebook, it removed the anti-Semitic categories … and said it would explore ways to fix the problem, such as limiting the number of categories available or scrutinizing them before they are displayed to buyers.
This isn’t the first time ProPublica has tested Facebook’s ad categories. In 2016, ProPublica bought an ad in Facebook’s housing categories and blocked it from being shown to African-Americans, Hispanics and Asian-Americans. This raised the question of whether such ad targeting violates laws against discrimination in housing advertising.
Civil rights lawyer John Relman said at the time, “This is horrifying. This is massively illegal. This is about as blatant a violation of the federal Fair Housing Act as one can find.”
After ProPublica reported it, Facebook built a system that it said would prevent such ads from being approved.
[Screenshot: ProPublica’s ad-buying process on Facebook this week]
Slate tried something similar on Thursday, buying an ad targeting categories including “Kill Muslimic Radicals” and “Ku-Klux-Klan”; more than a dozen hateful categories were approved. “In our case, it took Facebook’s system just one minute to give the green light,” Slate reported.
Like many tech companies, Facebook has taken a hands-off approach to its advertising business. Traditional media companies, on the other hand, select the audiences they offer advertisers in a process called mediated communication.
Facebook generates its ad categories automatically, based both on what users explicitly share with Facebook and what they implicitly convey through their online activity, ProPublica reported.
Traditionally, tech companies have said it’s not their role to censor the internet or to discourage legitimate political expression. In the wake of the violent protests in Charlottesville by right-wing groups that included self-described Nazis, Facebook and other tech companies vowed to strengthen their monitoring of hate speech.