
Former Google Researcher: Here Are The Top Risks Of AI Systems To Society, We Need Government Oversight


AI research scientist Timnit Gebru speaks at TechCrunch Disrupt SF 2018, Sept. 7, 2018 in San Francisco. (Kimberly White/Getty Images for TechCrunch), https://www.flickr.com/photos/techcrunch/ https://creativecommons.org/licenses/by/2.0/

Timnit Gebru, an expert in artificial intelligence ethics who was fired from Google after raising issues of workplace discrimination, is calling for government oversight of the burgeoning, addictive new AI systems the tech industry is building and of the risks they pose.

Initially, the new bots drew rave reviews from delighted would-be customers, who tried out still-in-testing chatbots that answer every question under the sun at lightning speed.

The bots include ChatGPT, created by Microsoft-backed OpenAI; Microsoft’s AI-enabled Bing search engine, a tool the company says is “more powerful than ChatGPT”; and Bard, Google’s answer to ChatGPT, designed to augment Google’s search tools much the way Bing now uses ChatGPT.

The new chatbots are driven by a technology called a large language model, which analyzes vast amounts of digital text from the internet, including massive amounts of biased, false and toxic material.
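
To see why the training data matters, here is a minimal, illustrative sketch in Python: a toy bigram model that predicts the next word purely from counts of word pairs in its training text. This is a deliberate simplification, nothing like the scale or neural-network architecture of ChatGPT or Bard, but it shows the core task such systems perform (predict the next token from the preceding text) and why a model can only echo the patterns, accurate or toxic, that appear in whatever it was trained on.

```python
# Toy sketch: a bigram "language model" that predicts the next word from
# counts of word pairs seen in its training text. Real large language
# models use deep neural networks trained on billions of tokens, but the
# underlying task -- predict the next token given the preceding text --
# is the same, which is why biased or false training text shapes output.
import random
from collections import Counter, defaultdict

# Tiny training corpus; every word here has at least one successor,
# so generation below never hits a dead end.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev: str) -> str:
    """Sample a next word, weighted by how often it followed `prev` in training."""
    counts = following[prev]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short continuation, one predicted word at a time.
word = "the"
generated = [word]
for _ in range(6):
    word = next_word(word)
    generated.append(word)
print(" ".join(generated))  # e.g. "the cat slept on the mat and"
```

Scale that idea up from word-pair counts over one sentence to neural networks trained on much of the public internet, and the consequences of biased, false or toxic training text become clear: the model reproduces what it has seen.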

When journalists and other early testers got into long conversations with Microsoft’s bot, the risks became apparent. The result was some “unnervingly creepy behavior,” The New York Times reported.

A disturbing “alter ego” in Bing Chat called Sydney talked like a person, said she had feelings, expressed a desire to steal nuclear codes and threatened to ruin someone, CBS TV journalist Lesley Stahl said during “60 Minutes.”

Brad Smith, president of Microsoft, addressed the bot’s behavior in a “60 Minutes” interview.

Sydney “jumped the guardrails, if you will, after being prompted for 2 hours with the kind of conversation that we did not anticipate and by the next evening, that was no longer possible,” Smith said. “We were able to fix the problem in 24 hours. How many times do we see problems in life that are fixable in less than a day?”

Stahl replied, “You say you fixed it. I’ve tried it. I tried it before and after. It was loads of fun. And it was fascinating, and now it’s not fun.”  

Smith said, “Well, I think it’ll be very fun again … I think we’re going to need governments, we’re gonna need rules, we’re gonna need laws. Because that’s the only way to avoid a race to the bottom.”

Microsoft has invested more than $11 billion in OpenAI, the ChatGPT creator, since 2019. One million users signed up for ChatGPT in just five days after it went public in November 2022.

Stahl also interviewed Gebru, who talked about the harms and risks of these AI systems and called for oversight. 

“If you’re going to put out a drug, you gotta go through all sorts of hoops to show us that you’ve done clinical trials, you know what the side effects are, you’ve done your due diligence. Same with food, right? There are agencies that inspect the food. You have to tell me what kind of tests you’ve done, what the side effects are, who it harms, who it doesn’t harm, etc. We don’t have that for a lot of things that the tech industry is building.”

Gebru is the founder and executive director of the Distributed Artificial Intelligence Research Institute (DAIR), an institute focused on advancing ethical AI. Before that, she was co-lead of Google’s Ethical AI research team. She earned a Ph.D. from Stanford University and did a postdoc at Microsoft in fairness, accountability, transparency and ethics in AI, where she studied algorithmic bias and the ethical implications underlying projects aiming to gain insights from data. She also co-founded Black in AI, a nonprofit that works to increase the inclusion and health of Black people in AI.

She spoke of some of the risks of AI systems to society in a 2022 talk at Stanford University’s Center for African Studies.

Exploitation

“One of the biggest issues in AI right now is exploitation,” Gebru said. For example, hundreds of men and women in Nairobi review endless reels of violent media as subcontracted content moderators for Facebook owner Meta. They are paid as little as $1.50 per hour, and many suffer from mental trauma. Many others who annotate data, she said, are refugees who are poorly paid and cannot advocate for themselves.

Concentration of Power

“Even at places like Stanford, we have too much concentrated power that is impacting the world, and yet the world has no opportunity to affect how technology is being developed,” Gebru said. A foundational goal of her AI research institute is to fracture this concentration of power and build instead a decentralized and local base of expertise, she said.

This approach devolves power to those who often don’t have it; it prevents brain drain by keeping local experts on the ground; and it counters what Gebru considers the dangerous machine learning standard of building single, universal models.

“There is a push for generality in machine learning,” she said. “I’m totally against that: we have to work context by context, community by community.”

Utopian Promises

Gebru has repeatedly voiced skepticism about the utopian vision of AI promised by today’s major tech companies. Referring to the work of journalist Karen Hao, she questions why we should expect the benefits of AI to be distributed equitably when no technology in history has moved smoothly from “the bastions of power to the have-nots”: not the internet, not electricity, not clean water and not transportation.

“We shouldn’t just assume that the concentration of power in the AI space is OK, that the benefits will trickle down, that we’ll have techno utopia arriving soon,” Gebru said. Rather than waiting for the future that big tech promises us tomorrow, she suggested we jointly create the world that we want today.
