Artificial Intelligence Is Inheriting Human Bias. Here’s Why Diversity And Inclusion Matter

We are prejudiced and we are teaching artificial intelligence how to be prejudiced.

Diversity and inclusion matter – from who designs AI to who sits on the company boards.

Otherwise we risk building machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes, the New York Times reported in an article titled “Artificial Intelligence’s White Guy Problem.”

As machines get closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed in the patterns of language use, new research shows, according to The Guardian.

The research, published April 17 in the journal Science, focuses on a machine learning tool known as “word embedding,” which is already transforming the way computers interpret speech and text.

Some people say that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.
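The mechanism is easy to see in miniature. In a word-embedding model, each word is a point in a vector space, and association between words shows up as cosine similarity. The sketch below uses hand-made toy vectors (hypothetical values, not the output of any real trained model) to illustrate how biased co-occurrence patterns in text become biased geometry:

```python
import math

# Toy 3-dimensional "embeddings" -- hypothetical values for illustration,
# not from a real model. In trained embeddings, this geometry is learned
# from how words co-occur in large text corpora.
vectors = {
    "he":         [0.9, 0.1, 0.2],
    "she":        [0.1, 0.9, 0.2],
    "programmer": [0.8, 0.2, 0.5],
    "nurse":      [0.2, 0.8, 0.5],
}

def cosine(a, b):
    """Cosine similarity: 1.0 = same direction, 0.0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# If a corpus pairs "programmer" with male pronouns more often, the
# learned vectors drift closer together -- the bias lives in the geometry.
print(cosine(vectors["programmer"], vectors["he"]))   # higher
print(cosine(vectors["programmer"], vectors["she"]))  # lower
```

This is the kind of association the Science study measured statistically across real embeddings trained on web text.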

“A lot of people are saying this is showing that AI is prejudiced,” said Joanna Bryson, a computer scientist at the University of Bath, who co-authored the Science journal report. “No. This is showing we’re prejudiced and that AI is learning it.”

The findings raise fears of existing social inequalities and prejudices being reinforced in new and unpredictable ways as more decisions affecting our everyday lives are made by machines.

Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to, the New York Times reported.


One example is Hewlett-Packard’s webcam software, which had difficulty recognizing people with dark skin tones:

This is fundamentally a data problem. Algorithms learn by being fed certain images, often chosen by engineers, and the system builds a model of the world based on those images. If a system is trained on photos of people who are overwhelmingly white, it will have a harder time recognizing nonwhite faces.
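A minimal sketch of that data problem, using toy brightness numbers rather than a real vision system: a hypothetical “detector” calibrated on a training set that overwhelmingly represents one group ends up missing faces from the under-represented group.

```python
# Toy, hypothetical data -- brightness values standing in for face photos.
light_faces = [200, 210, 205, 198, 212]
dark_faces  = [90, 95, 88, 92, 85]

# The training set is overwhelmingly one group -- the data problem.
training = light_faces * 9 + dark_faces[:1]

# The "model" learns to accept anything near what it has already seen.
mean = sum(training) / len(training)

def detects_face(brightness, tolerance=60):
    return abs(brightness - mean) <= tolerance

light_hits = sum(detects_face(b) for b in light_faces)
dark_hits  = sum(detects_face(b) for b in dark_faces)
print(light_hits, "of", len(light_faces), "light faces detected")  # 5 of 5
print(dark_hits, "of", len(dark_faces), "dark faces detected")     # 0 of 5
```

Real face-recognition systems are vastly more complex, but the failure mode is the same: the model can only generalize from what its training data contains.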

Another example, reported by ProPublica, shows that widely used software for assessing the risk of recidivism in criminals was twice as likely to mistakenly flag black defendants as being at a higher risk of committing future crimes. It was also twice as likely to incorrectly flag white defendants as low risk, the New York Times reported:

Histories of discrimination can live on in digital platforms, and if they go unquestioned, they become part of the logic of everyday algorithmic systems. Another scandal emerged recently when it was revealed that Amazon’s same-day delivery service was unavailable for zip codes in predominantly black neighborhoods. The areas overlooked were remarkably similar to those affected by mortgage redlining in the mid-20th century. Amazon promised to redress the gaps, but it reminds us how systemic inequality can haunt machine intelligence.
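The disparity ProPublica measured is a difference in error rates between groups. A minimal sketch with invented toy counts (not ProPublica's actual data) shows how the two metrics come from a per-group confusion table:

```python
# Toy counts per group (illustrative only -- not ProPublica's real numbers):
# fp = non-reoffenders flagged high-risk, tn = non-reoffenders flagged low-risk,
# fn = reoffenders flagged low-risk,      tp = reoffenders flagged high-risk.
groups = {
    "group_a": {"fp": 40, "tn": 60, "fn": 10, "tp": 90},
    "group_b": {"fp": 20, "tn": 80, "fn": 25, "tp": 75},
}

rates = {}
for name, c in groups.items():
    # Share of people who did NOT reoffend but were flagged high-risk:
    false_positive_rate = c["fp"] / (c["fp"] + c["tn"])
    # Share of people who DID reoffend but were flagged low-risk:
    false_negative_rate = c["fn"] / (c["fn"] + c["tp"])
    rates[name] = (false_positive_rate, false_negative_rate)
    print(name, "FPR:", false_positive_rate, "FNR:", false_negative_rate)
```

With these toy numbers, group A's false positive rate is twice group B's even though the tool may look "accurate" overall, which is why per-group error rates, not aggregate accuracy, revealed the problem.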

Dominique Davis is an applied research scientist at Dimensional Mechanics in Bellevue, Washington. The company makes artificial intelligence more accessible to organizations of all sizes.

With a background in cognitive psychology, Davis brings a diversity and inclusion lens to her work in artificial intelligence, according to a report by Jared Karol in Tech Inclusion:

“Diversity is absolutely critical to my work in artificial intelligence,” Davis said. “We’re already seeing unfortunate cases where people of color are misclassified by artificial intelligence systems, and artificial intelligence systems are inheriting human bias.”

As a cognitive scientist on the team, Davis runs cognitive studies on humans and integrates that data into the company’s systems. She also helps customers solve data challenges by building AI models with them.

“Rather than finding people to fit the mold, we need to change the mold when necessary,” Davis said.

People working on AI need to scrutinize their own work by promoting the use of representative data sets and building inclusive teams, according to Davis. “Not only that. We need to build AI systems that minimize human bias in a variety of industries – in the classroom, in the doctor’s office, and in the courtroom.”

Davis’s personal experiences growing up have heavily shaped her conviction about why diversity and inclusion are so important.

“I come from a mixed race background and can’t imagine a world that’s not diverse and inclusive,” she says. She believes in celebrating unique walks of life and knows how much each of us can contribute if we’re only given a seat at the table.

Now living in the Seattle area, Davis is known for her work in AI and also as a diversity and inclusion advocate, working with the Seattle chapter of the Anita Borg Institute as a community engagement lead to connect women technologists around the world.

“Our mission is to strive to create a tech workforce that reflects who we build for,” she says. “We also provide professional and technical resources for women in the Puget Sound region.”

Diversity doesn’t matter without inclusion, Davis said. “We need accurate metrics for inclusion. You can measure diversity in terms of company demographics but inclusion is a feeling – and it’s very difficult to capture on paper. We need to find a way to accurately measure inclusion and ensure the measure lines up nicely with diversity.”

Like all technologies before it, artificial intelligence will reflect the values of its creators, New York Times reported.

So inclusivity matters – from who designs it to who sits on the company boards and which ethical perspectives are included. Otherwise, we risk constructing machine intelligence that mirrors a narrow and privileged vision of society, with its old, familiar biases and stereotypes.