
Here’s Why A.I. Has a Race Problem

Technology is not infallible. Facial recognition software, it turns out, has trouble accurately identifying darker-skinned faces. In short, A.I. has a race problem.

Brian Brackeen, the Black businessman behind the facial recognition software Kairos, found this out firsthand, with his own technology. When he was pitching Kairos to a potential customer, the software suddenly stopped working. “Panicked, he tried adjusting the room’s lighting, then the Wi-Fi connection, before he realized the problem was his face. Brackeen is Black, but like most facial recognition developers, he’d trained his algorithms with a set of mostly white faces. He got a white, blond colleague to pose for the demo, and they closed the deal. It was a Pyrrhic victory, he says: ‘It was like having your own child not recognize you,’” Bloomberg reported.

Even though the Miami-based Brackeen has since improved his software by adding more Black and brown faces to his image sets, problems persist across facial recognition as a whole. This is one reason Brackeen has been a loud voice against governments and law enforcement using facial recognition to apprehend suspects. According to Brackeen, too many people of color will be negatively impacted, and the risk of misidentification is too high.

This is an industry-wide problem. “The same problem bedevils companies including Microsoft, IBM, and Amazon and their growing range of customers for similar services. Facial recognition is being used to help India’s government find missing children, and British news outlets spot celebrities at royal weddings. More controversially, it’s being used in a growing number of contexts by law enforcement agencies, which are often less than forthcoming about what they’re using it for and whether they’re doing enough about potential pitfalls,” Bloomberg reported.

Brackeen’s scenario has played out with other systems as well. “Microsoft, IBM, and China’s Face++ misidentified darker-skinned women as often as 35 percent of the time and darker-skinned men 12 percent of the time, according to a report published by MIT researchers earlier this year. The gender difference owes to a smaller set of women’s faces. Such software can see only what it’s been taught to see,” Bloomberg reported.

According to a 2016 study by Georgetown University, almost none of the law enforcement agencies currently using facial recognition require suppliers to meet a minimum threshold for overall accuracy, let alone for racial disparities. “An inaccurate system will implicate people for crimes they didn’t commit and shift the burden to innocent defendants to show they are not who the system says they are,” says Jennifer Lynch, senior staff attorney for the Electronic Frontier Foundation, an advocate for civil liberties online.

In a more recently published report, MIT Media Lab researcher Joy Buolamwini found that face-analyzing A.I. works much better on white faces than on Black faces. She and co-author Timnit Gebru tested gender-classification software from Microsoft, IBM, and the Chinese company Megvii, which makes Face++, and their findings point directly to A.I.’s race problem. “If the person in the photo was a white man, her study found, the systems guessed correctly more than 99 percent of the time. For Black women, though, the systems messed up between 20 and 34 percent of the time—which, if you consider that guessing at random would mean being right 50 percent of the time, means they came close to not working at all,” Boston Magazine reported.

The findings were compiled into a project called Gender Shades.

“People of color are in fact the global majority. The majority of the world, who I like to call the under-sampled majority, aren’t being represented within the training data or the benchmark data used to validate artificial intelligence systems,” Buolamwini says. “We’re fooling ourselves about how much progress we’ve made.”
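The core of what Gender Shades measures is disaggregated evaluation: computing error rates separately for each demographic group instead of reporting one overall accuracy figure. Here is a minimal Python sketch of that idea; the records, group labels, and helper function are hypothetical stand-ins, not the researchers' actual code or data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (predicted label, true label, demographic group).
# Illustrative values only; not data from the Gender Shades study.
records = [
    ("male",   "male",   "lighter-skinned male"),
    ("female", "female", "lighter-skinned female"),
    ("male",   "female", "darker-skinned female"),  # misclassification
    ("female", "female", "darker-skinned female"),
    ("female", "male",   "darker-skinned male"),    # misclassification
    ("male",   "male",   "darker-skinned male"),
]

def error_rate_by_group(records):
    """Return the misclassification rate for each demographic group."""
    totals, errors = defaultdict(int), defaultdict(int)
    for predicted, actual, group in records:
        totals[group] += 1
        errors[group] += predicted != actual  # True counts as 1
    return {group: errors[group] / totals[group] for group in totals}

for group, rate in sorted(error_rate_by_group(records).items()):
    print(f"{group}: {rate:.0%} error rate")
```

A single aggregate accuracy number would average away exactly the disparity this per-group breakdown exposes, which is why reporting results this way proved so revealing.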

Brian Brackeen, founder of Kairos AR Inc. Photographer: Anita Sanikop for MOGULDOM