Law Enforcement Should Not Use AI Algorithms To Make Decisions About Jailing People, Tech Consortium Says

Written by Ann Brown

There have been various studies on the dangers of law enforcement using artificial intelligence algorithms, and now several tech giants are weighing in, agreeing that their own technology should not be used by police because of potential bias and transparency issues.

A consortium that includes major tech firms has warned against using AI algorithms for law enforcement when making decisions about imprisoning people.


In a newly published report, the Partnership on AI said that current algorithms aimed at helping police determine who should be granted bail, parole or probation, and which help judges make sentencing decisions, “are potentially biased, opaque, and may not even work,” Bloomberg reported.

The consortium includes Facebook Inc., Microsoft Corp., Alphabet Inc.’s Google and DeepMind, Apple Inc., and International Business Machines Corp., as well as academic researchers.

According to the consortium, there is ample evidence of such failures. In 2016, a ProPublica investigation found that a widely used risk-assessment algorithm was twice as likely to incorrectly label Black defendants as being at higher risk than white defendants, Bloomberg reported.
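The disparity ProPublica measured boils down to comparing false positive rates across groups: of the defendants who did not reoffend, what share did the algorithm flag as high risk? A minimal sketch of that comparison, using entirely made-up records for illustration:

```python
# Comparing false positive rates across two groups, the kind of
# disparity ProPublica measured. All records below are invented
# for illustration; they are not real defendant data.

def false_positive_rate(records):
    """records: list of (predicted_high_risk, reoffended) boolean pairs."""
    false_positives = sum(1 for pred, actual in records if pred and not actual)
    actual_negatives = sum(1 for _, actual in records if not actual)
    return false_positives / actual_negatives

# Hypothetical risk predictions for two groups of defendants.
group_a = [(True, False), (True, False), (False, False), (True, True), (False, False)]
group_b = [(True, False), (False, False), (False, False), (False, False), (True, True)]

# Group A: 2 of 4 non-reoffenders wrongly flagged -> 0.5
# Group B: 1 of 4 non-reoffenders wrongly flagged -> 0.25
print(false_positive_rate(group_a), false_positive_rate(group_b))
```

A model can have equal overall accuracy for both groups while still producing a gap like this, which is why critics focus on error rates within each group rather than accuracy alone.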

In 2018, Microsoft called for clearer laws around the use of facial recognition technology amid concerns that such software could be used by police and governments in ways that violate civil liberties. Amazon later chimed in, saying it also had concerns about the use of the technology, which it sells, Bloomberg reported.

The backlash against AI algorithms has grown to the point that a San Francisco government committee moved the city closer to approving a complete ban on government use of facial recognition technology. If the measure passes, San Francisco will become the first U.S. city to enact such a ban.

Some observers have said that money is behind the push for law enforcement to use AI. “Those in power are [the ones] who mostly stand to profit from it. This is either through making/selling the gear, or using the tech to reduce headcount by replacing folks. This isn’t about society or even civilization, it’s about money and power,” technology writer S. A. Applin wrote in Fast Company.

Still, money isn’t the only reason. Applin argues that fear, especially as cities are growing more diverse, is another major reason.

“But money and power aren’t the only reasons behind the push to adopt facial recognition. Cooperation is how humans have managed to survive as long as we have, and the need to categorize some people as ‘the other’ has been happening since there have been humans. Unfortunately, misconceptions and speculations about who some of us are and how we might behave have contributed to fear and insecurity among citizens, governments, and law enforcement,” Applin wrote.

Applin added: “Today, those fearful ideas, in combination with a larger, more mobile, more diverse population, have created a condition by which we know of each other, but do not know each other, nor do we often engage with each ‘other’ unless absolutely necessary. Our fears become another reason to invest in more ‘security,’ even though, if we took time to be social, open, and cooperative in our communities, there would be less to fear, and more security as we looked out for each other’s well being.”