Do The Algorithms A Favor. ‘Invest More In Diversifying The AI Field Itself’: McKinsey
There is an ongoing debate over whether bias demonstrated by artificial intelligence (AI) can be fixed. A new study entitled “Notes from the AI frontier: Tackling bias in AI (and in humans)” says there is hope — if more money is put into diversifying the world of AI.
The report found that AI actually could help humans make fairer decisions — that is, if work is also done to build fairness into AI systems themselves.
“Notes from the AI frontier: Tackling bias in AI (and in humans)” takes a look at where algorithms can help decrease disparities caused by human biases.
“Humans are also prone to misapplying information. For example, employers may review prospective employees’ credit histories in ways that can hurt minority groups, even though a definitive link between credit history and on-the-job behavior has not been established. Human decisions are also difficult to probe or review: people may lie about the factors they considered, or may not understand the factors that influenced their thinking, leaving room for unconscious bias,” McKinsey reported.
On the positive side, “AI can reduce humans’ subjective interpretation of data, because machine learning algorithms learn to consider only the variables that improve their predictive accuracy, based on the training data used.”
There has been evidence that algorithms could help reduce racial disparities in the criminal justice system.
But on the flipside, AI can also be biased because of the humans behind the data. And this can be detrimental to Black people.
“Julia Angwin and others at ProPublica have shown how COMPAS, used to predict recidivism in Broward County, Florida, incorrectly labeled African-American defendants as ‘high-risk’ at nearly twice the rate it mislabeled white defendants. Recently, a technology company discontinued development of a hiring algorithm based on analyzing previous decisions after discovering that the algorithm penalized applicants from women’s colleges. Work by Joy Buolamwini and Timnit Gebru found error rates in facial analysis technologies differed by race and gender. In the ‘CEO image search,’ only 11 percent of the top image results for ‘CEO’ showed women, whereas women were 27 percent of US CEOs at the time,” McKinsey reported.
There has been research done on how to correct bias in AI.
“Several approaches to enforcing fairness constraints on AI models have emerged. The first consists of pre-processing the data to maintain as much accuracy as possible while reducing any relationship between outcomes and protected characteristics, or to produce representations of the data that do not contain information about sensitive attributes,” McKinsey reported.
Another approach is to use post-processing techniques, which transform some of the model’s predictions after they are made so that they satisfy a fairness constraint. Alternatively, fairness constraints can be imposed on the optimization process itself during training.
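To make the post-processing idea concrete, here is a minimal sketch — not taken from the McKinsey report, and using entirely synthetic scores — of one common variant: choosing a separate decision threshold for each demographic group so that both groups receive positive outcomes at the same rate (a criterion known as demographic parity).

```python
# Illustrative sketch of post-processing for demographic parity.
# All scores below are synthetic; a real system would use model outputs.

def group_threshold(scores, rate):
    """Return the score threshold that labels roughly `rate` of scores positive."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(rate * len(ranked)))
    return ranked[k - 1]

# Hypothetical model scores for two demographic groups.
scores_a = [0.9, 0.8, 0.75, 0.6, 0.4, 0.3]
scores_b = [0.7, 0.55, 0.5, 0.35, 0.2, 0.1]

target_rate = 0.5  # desired share of positive decisions in each group
t_a = group_threshold(scores_a, target_rate)
t_b = group_threshold(scores_b, target_rate)

decisions_a = [s >= t_a for s in scores_a]
decisions_b = [s >= t_b for s in scores_b]

# Each group now receives positive decisions at the same 50% rate,
# whereas a single global threshold would favor group A's higher scores.
```

The trade-off, as the debate described below suggests, is that equalizing outcome rates this way can reduce overall predictive accuracy and does not address bias already baked into the training data.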
Still, the process of correcting AI bias “can actually harm black, gay, and transgender people,” Vox reported.
There has been incident after incident of AI bias.
“Amazon abandoned a recruiting algorithm after it was shown to favor men’s resumes over women’s; researchers concluded an algorithm used in courtroom sentencing was more lenient to white people than to Black people; a study found that mortgage algorithms discriminate against Latino and African American borrowers,” Vox reported.
Then there was Google’s image-recognition system, which labeled African Americans as “gorillas” in 2015.
Obviously, AI technology has a long way to go.