Here’s What Else You Need To Know About Amazon’s New, Scary AI Offering

Written by Lauren DeLisa Coleman
A customer sets up facial recognition on an Apple Inc. iPhone X smartphone during the sales launch at a store in San Francisco. Photographer: Michael Short/Bloomberg © 2017 Bloomberg Finance LP

The New York Times recently reported that a new study from researchers at the M.I.T. Media Lab found that Amazon’s facial recognition technology, called Rekognition, exhibited far greater error in identifying the gender of female faces and of darker-skinned faces than comparable services from companies such as Microsoft and IBM. The study was formally presented yesterday at the AAAI/ACM Conference on AI, Ethics, and Society (AIES) in Hawaii. However, there are even more troubling elements to consider about this emerging technology and what it means, and says, about our present culture overall.

First, it is particularly telling that this study was unveiled at nearly the same time that notable media personality Tom Brokaw commented that “Hispanics should work harder at assimilation,” for that belief is not so much about accuracy as it is about a standard imposed upon an entire collective without the input and concerns of that collective. The default, absent any unequivocal standard of merit, is that of one particular subset of a social group that drives cultural norms without litmus tests for subconscious cultural bias, or even awareness of it.

It should not be surprising, then, that such a systematic pattern of thought in human “intelligence” becomes reflected in artificial intelligence, given its creators.

“There’s a tendency in the research and business world to use majority or high-status groups as a stand-in for all people, as the prototype of a human,” explains Andrei Cimpian, an associate professor in the Department of Psychology at New York University. “This renders ethnic minorities and lower-status groups invisible, with potentially disastrous consequences.”

Facial recognition in law enforcement

Cimpian says that the fact that software such as Rekognition, which seems to embody such assumptions, might actually be used for law-enforcement purposes should be terrifying to us all. “It would layer machine bias on top of the existing human biases, making the problems it was designed to solve worse,” he says.

However, this is only the very tip of the facial recognition iceberg, and this is where things become particularly problematic.

According to Lauren Rhue, Assistant Professor at Wake Forest University School of Business, the issue goes far beyond the images themselves. She says, “This (M.I.T. Media Lab) study is well executed, of course, and highlights the problems associated with large-scale deployment of facial recognition without oversight. This is especially true as law enforcement adopts the software, but it affects other companies who would use facial recognition for their internal needs.”


Rhue explains that because these systems classify people into one of two categories, men or women, it is easy to quantify the bias in the facial recognition and communicate that error rate. Companies can run diagnostics and correct for that error, as IBM and Microsoft have done with their facial recognition technology. “However, one challenge is that these companies are quickly moving into emotional classification and other more subjective areas. In this area, it is very difficult to identify the ground truth,” says Rhue.
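
As a minimal sketch of the kind of diagnostic Rhue describes, consider computing the error rate of a binary gender classifier for each demographic subgroup, in the spirit of the M.I.T. Media Lab study’s intersectional analysis. The data, column names, and group labels below are hypothetical:

    # Per-subgroup error-rate diagnostic for a binary gender classifier.
    # All data, column names, and group labels here are hypothetical.
    import pandas as pd

    df = pd.DataFrame({
        "true_gender":      ["F", "F", "M", "M", "F", "M"],
        "predicted_gender": ["M", "F", "M", "M", "M", "M"],
        "skin_tone":        ["darker", "lighter", "darker",
                             "lighter", "darker", "lighter"],
    })

    # Error rate per (skin tone, gender) subgroup; a persistent gap
    # between subgroups is the kind of bias the study quantified.
    df["error"] = df["true_gender"] != df["predicted_gender"]
    print(df.groupby(["skin_tone", "true_gender"])["error"].mean())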

FILE – In this March 12, 2015, file photo, Seattle police officer Debra Pelich, right, wears a video camera on her eyeglasses as she talks with Alex Legesse before a small community gathering in Seattle. While the Seattle Police Department bars officers from using real-time facial recognition in body camera video, privacy activists are concerned that a proliferation of the technology could turn the cameras into tools of mass surveillance. The ACLU and other organizations on Tuesday, May 22, 2018, asked Amazon to stop selling its facial-recognition tool, called Rekognition, to law enforcement agencies. (AP Photo/Elaine Thompson, File)ASSOCIATED PRESS

Imagine the difficulty of scoring a face for its level of happiness, for example. Rhue asks one to imagine what a happiness score of, say, 50 would actually look like. “In these instances, it can be difficult to assess whether the company has systematic biases. We can look at subgroup consistency; for example, are darker faces viewed as angrier than lighter faces? But not all people are convinced by that measure. If there is a difficulty in the more objective measures of men and women, then we should definitely exercise caution with the adoption of facial recognition in other arenas.”
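
A subgroup-consistency check of the kind Rhue mentions might look like the sketch below. Because there is no ground truth for emotion, the check compares only how the model scores different groups on the same attribute; the confidence scores and group labels are hypothetical:

    # Subgroup-consistency check: with no ground truth for emotion,
    # compare how a model scores each group on the same attribute.
    # The confidence scores and group labels below are hypothetical.
    import statistics

    anger_scores = {
        "darker":  [62.0, 55.5, 71.2, 48.9],   # model's ANGRY confidence (%)
        "lighter": [21.3, 18.7, 35.1, 24.4],
    }

    for group, scores in anger_scores.items():
        print(f"{group}: mean ANGRY confidence = {statistics.mean(scores):.1f}%")
    # A large gap between groups photographed under comparable conditions
    # would suggest systematic bias, even without a "true" emotion label.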

Rhue says that she recently ran a quick analysis on two pictures from her own sample using Rekognition, and she saw that it suffered from the same bias. For example, one subject was classified as happy while the other was not. “If Amazon is selling this software broadly, then we should have concerns that a smiling man is viewed as disgusted and surprised instead of happy and calm, as was the case in my study.”
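
For reference, an analysis like Rhue’s can be run against Rekognition’s DetectFaces API. The sketch below uses the real boto3 client call; the image file name is a placeholder, and AWS credentials are assumed to be configured:

    # Pull ranked emotion labels for one face image from Amazon
    # Rekognition's DetectFaces API via boto3. "face.jpg" is a
    # placeholder; AWS credentials must already be configured.
    import boto3

    client = boto3.client("rekognition")

    with open("face.jpg", "rb") as f:
        response = client.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # "ALL" includes emotions, not just boxes
        )

    for face in response["FaceDetails"]:
        # Emotions such as HAPPY, CALM, ANGRY come back with confidences.
        for emotion in sorted(face["Emotions"],
                              key=lambda e: e["Confidence"], reverse=True):
            print(emotion["Type"], round(emotion["Confidence"], 1))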

Indeed, during the recent CES conference, the CMO of Deloitte advocated heavily on stage for the use of physiological signals in upcoming marketing strategies, such as the amount of heat a person emits or their level of pupil dilation, which could soon be detected by your phone to help marketers identify whether or not certain images, campaigns, or products resonate with the user. This would all be done via various AI elements and would raise questions around privacy, reaching far beyond marketing, to a hyper-alert level in society today.

This is the point at which many see policy coming into play.

“I remain extremely concerned about reports of bias in face recognition and analysis technologies, and recent studies continue to validate my concerns,” Congressman Emanuel Cleaver, II (D-MO) told me via email. “The potential for illegal discrimination and/or unfair practices resulting from any technology that performs less accurately for women and minorities is unsettling.”

The U.S. Capitol Building. Photographer: Alex Edelman/Bloomberg © 2019 Bloomberg Finance LP

Rep. Cleaver says that this concern is particularly salient considering that companies such as Amazon have pitched these technologies to private and public actors to be used in presumably diverse populations.

“Last year, I sent a number of letters to several agencies, including the National Institute of Standards and Technology (NIST), encouraging NIST to endorse industry standards and ethical best practices for the independent testing of demographic-based biases in facial recognition technology. I also called on the Department of Justice to investigate law enforcement’s use of facial recognition technologies and sent a letter to Jeff Bezos, CEO of Amazon, inquiring about the company’s Rekognition contracts,” explains Rep. Cleaver.

Indeed, as Chairman of the House Financial Services Subcommittee on National Security, International Development, and Monetary Policy and a member of the Homeland Security Committee, Cleaver says that he plans to focus on this issue and has several thoughts on problem-solving.

He says that companies should ensure that the training data they use is representative of the appropriate demographic and operational conditions. Rep. Cleaver also says that companies that have chosen not to voluntarily participate in government testing, or have chosen not to publicly produce data on the testing of demographic considerations, should provide documentation to customers that explains the capabilities and limitations of their technology.

“Lastly, I caution government actors and private-sector companies against contracting with sellers who cannot demonstrate that their technology has been appropriately tested for accuracy rates across demographic sub-groups,” adds Rep. Cleaver.

Given Amazon’s new HQ2 locations, there is a unique opportunity now to use such offices for further research, diversity partnerships, incubators, and much more as they relate to this very sensitive issue, and its benefits, around AI and diversity. Because one thing we know for certain: accountability is, without a doubt, the new buzzword in tech for 2019.

This article originally appeared in Forbes.

Posted with permission of Forbes LLC.

About Lauren DeLisa Coleman

Lauren DeLisa Coleman is a digi-cultural trend analyst, author, and strategist. Her expertise is deciphering and forecasting power trends and public sentiment within the convergence of pop culture, millennials, and emerging tech behavior, and analyzing the impact on business and governance. Her sub-specialty is diverse demos, and she is a contributor to media outlets from Forbes to Campaigns & Elections, as well as a guest commentator on MSNBC. As an entrepreneur, she has provided strategic intelligence on projects from Snoop Dogg to Microsoft execs to public policy leaders. She heads Lnk Agency, a hot trend consulting & multimedia company. Her latest e-book is “America’s Most Wanted: The Millennial.” You can read her Forbes contributions here: https://www.forbes.com/sites/laurencoleman/#3975218462c5
You can read her Inc column here: https://www.inc.com/author/lauren-delisa-coleman
www.ultralauren.com @ultra_Lauren
http://lnkagency.com/