Like San Francisco, Chicago Should Ban Facial Recognition Technology
Facial recognition technology has been deployed by cities and companies across the U.S., often for controversial purposes. A landlord in Brooklyn planned to replace key fobs with facial recognition entry. Amazon was reported to have sold its facial recognition software to immigration officials. Police departments in both Chicago and Detroit have purchased real-time face surveillance technology.
However, several cities are pushing back on the use of this technology. Leading the pack, San Francisco became the first place in the U.S. to ban the city’s purchase and use of facial recognition surveillance technology with its “Stop Secret Surveillance” ordinance. Similar laws are being discussed in Oakland, CA; Somerville, MA; and elsewhere.
Following San Francisco’s example, the city of Chicago should push a similar measure to protect its communities from this intrusion.
Many critics argue that facial recognition technology invades our privacy and diminishes our due process rights. One of the most pernicious examples is China’s use of facial recognition to surveil its Uyghur Muslim minority. The technology’s potential for mass surveillance is frightening, and according to experts, there is no place where this could be a more plausible reality than Chicago.
With roughly 10,000 accessible surveillance cameras, former U.S. Homeland Security Secretary Michael Chertoff said he didn’t think “there is another city in the U.S. that has an extensive and integrated camera network as Chicago has.”
In a report by Georgetown Law’s Center on Privacy & Technology, researchers and scholars wrote: “If cities like Chicago equip their full camera networks with face recognition, they will be able to track someone’s movements retroactively or in real-time, in secret, and by using technology that is not covered by the warrant requirements of existing state geolocation privacy laws.” When we add in things like automated license plate readers and body cameras worn by police, the technical infrastructure for a “surveillance city” is already in place.
These prospects should alarm all Chicagoans regardless of race or gender, but researchers have also documented racial biases in facial recognition: its accuracy drops when identifying women and people with darker skin. These algorithmic biases could lead to a variety of injustices, particularly false identifications and arrests, which would disproportionately affect black and brown Chicagoans. At the Congressional Oversight Committee’s hearing on facial recognition, law professor Cedric Ferguson said that “surveillance is both a civil rights issue and a civil liberties issue” and should be regulated with “racial justice in mind.”
Using this technology also has a chilling effect on freedom of expression and our right to protest. While we don’t have an example as dystopian as China’s in America, we do know that police used the Maryland Image Repository System to monitor protestors during the riots in Baltimore. Imagine if CPD had used this technology at the rallies and protests over the killing of Laquan McDonald. And if it had, how and when would the public have found out?
The technology in and of itself is not inherently “bad.” There is a lot of positive potential in the field of medicine, as facial mapping and recognition programs can be used to spot strokes and other health issues that manifest in the face.
Police departments often argue that facial recognition surveillance technology is a useful tool in pursuing evasive criminals. It could not only help officers locate and catch carjackers but also help them find missing persons or children during Amber Alerts. A CPD spokesperson said that facial recognition systems are “seldom used” by the department due to “technological limitations.” But as Attorney Karen Sheley of the ACLU of Illinois said to WTTW, this would imply that CPD is either wasting money on tech it can’t use or not being upfront about its use.
Regardless, there are too many issues that have to be worked out before this technology has ethical and tenable use in law enforcement. Police departments don’t fully understand the problems and negative implications of that use, and the companies that make facial recognition software resist oversight by asserting their proprietary rights. These systems of surveillance are often so opaque that someone accused of a crime might never find out that they were identified via facial recognition. And for this technology to catch those looking to commit crimes, by definition, it can only work if all of our faces are in a database.
It’s reasonable for people to assert their right to privacy and at least to know the manner in which they are being watched. Facial recognition surveillance takes that right away from them. Using surveillance tech on vulnerable populations could also be seen as antithetical to restoring community trust in policing, a goal outlined in the 2017 consent decree.
The allure of its promise doesn’t outweigh its potential for misuse, especially in Chicago. At the present moment, the risks of getting it wrong outweigh the benefits of using it.