
Why Some Experts Say More Advanced Artificial Intelligence Can Attack Humans: 3 Things To Know

Image: "Message from artificial intelligence" by Michael Cordedda (https://www.flickr.com/photos/mikeycordedda/), licensed under CC BY 2.0 (https://creativecommons.org/licenses/by/2.0/).

Citing a potential danger to humanity, more than 31,000 people, including tech leaders and researchers, signed an open letter on March 22 urging a six-month moratorium on developing artificial intelligence systems more powerful than GPT-4.

Twitter owner Elon Musk and Apple co-founder Steve Wozniak were among those who signed.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked. “Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Here’s why some experts say more advanced AI can attack humans: 3 things to know.

‘Possibility that AIs will run out of control’

Billionaire Microsoft co-founder and philanthropist Bill Gates expressed concern in his blog, Gates Notes, that AI could take over the world.

AI is not a flawless system, he explained, because “AIs also make factual mistakes and experience hallucinations.”

There’s the possibility that AIs will run out of control, Gates wrote. “Could a machine decide that humans are a threat, conclude that its interests are different from ours, or simply stop caring about us? Possibly.”

Gates called on governments and philanthropy, saying they “will need to play a major role in ensuring that (AI) reduces inequity and doesn’t contribute to it.”

There are threats posed by humans armed with AI, Gates wrote. “Like most inventions, artificial intelligence can be used for good purposes or malign ones. Governments need to work with the private sector on ways to limit the risks.”

Historian Niall Ferguson, author of 17 books including “Doom: The Politics of Catastrophe,” recently wrote that the AI doomsayers, including those who signed the petition for a six-month moratorium on AI development, should be taken seriously.

In an interview with The Spectator, Ferguson cited Liu Cixin’s novel “The Dark Forest,” which posits that “if there’s any superior intelligence in the universe, if it discovers us, it will immediately eradicate us just in case we might eradicate it.”

Eliezer Yudkowsky, one of the earliest researchers to analyze the prospect of powerful artificial intelligence, wrote in Time magazine that “pausing AI developments isn’t enough. We need to shut it all down.”

“To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails,” Yudkowsky wrote. “Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”

An existential threat

There’s a chance AI “goes wrong and destroys humanity,” Musk told CNBC’s David Faber in a May 16 interview. Musk has said in the past that he thinks AI represents one of the “biggest risks” to civilization.

Artificial intelligence could pose existential risks and governments need to know how to make sure the technology is not “misused by evil people,” former Google CEO Eric Schmidt warned at The Wall Street Journal’s CEO Council Summit in London.

Schmidt defined existential risk as “many, many, many, many people harmed or killed,” CNBC reported.

He described scenarios, “not today, but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues, or discover new kinds of biology. Now, this is fiction today, but its reasoning is likely to be true. And when that happens, we want to be ready to know how to make sure these things are not misused by evil people.”

Sam Altman, CEO of OpenAI which developed ChatGPT, said in March that he is a “little bit scared” of artificial intelligence and worries about authoritarian governments developing the technology.

Artificial intelligence poses “an existential threat to humanity” akin to nuclear weapons in the 1980s and should be reined in until it can be properly regulated, an international group of doctors and public health experts warned May 9 in BMJ Global Health. The experts, including members of International Physicians for the Prevention of Nuclear War, said AI’s ability to rapidly analyze sets of data could be misused for surveillance and disinformation campaigns to “further undermine democracy by causing a general breakdown in trust or by driving social division and conflict, with ensuing public health impacts,” Axios reported.

They raised concerns about the development of future weapons systems capable of locating, selecting and killing “at an industrial scale” without the need for human supervision. They cited AI’s potential impact on jobs. “While there would be many benefits from ending work that is repetitive, dangerous, and unpleasant, we already know that unemployment is strongly associated with adverse health outcomes and behavior,” they said.

The fear-mongering may be a public relations ploy

Skeptics might look at CEOs like Musk, with potential commercial interests at stake in slowing OpenAI’s development of GPT-5, and dismiss the Pause AI Open Letter as little more than a public relations ploy, wrote Becky Bracken, editor of Dark Reading, a news site for cybersecurity professionals.

“We have to be a little suspicious of the intentions here — many of the authors of the letter have commercial interests in their own companies getting a chance to catch up with OpenAI’s progress,” said Chris Doman, CTO of Cado Security, in a statement to Dark Reading. “Frankly, it’s likely that the only company currently training an AI system more powerful than GPT-4 is OpenAI, as they are currently training GPT-5.”