Top AI Researchers And Elon Musk Seek Pause On AI Tech Due To Potential Danger To Humanity

Citing a potential danger to humanity, at least 1,123 people, including Elon Musk and prominent artificial intelligence experts and executives, called for a six-month pause in developing systems more powerful than OpenAI’s newly launched GPT-4.

Twitter CEO Musk and Apple co-founder Steve Wozniak were among those who signed the open letter, which asks for a “public and verifiable” pause of the AI systems until shared safety protocols for such designs are developed, implemented and audited by independent experts.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt,” the letter stated. “This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”

The letter also said that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

The nonprofit Future of Life Institute, which issued the letter, paused publishing the names of signatories at 1,124, citing high demand.

Backed by $10 billion in funding from Microsoft, OpenAI launched the newest version of its GPT series of language models on March 14, using deep learning to generate human-like, conversational text.

The previous version, GPT-3.5, powered the company’s hugely popular ChatGPT chatbot when it launched four months ago, in November 2022.

The new and improved GPT-4 can read an uploaded graph and make calculations based on the data presented, or take an uploaded worksheet and provide responses to questions about it.

Critics say it can pass the bar exam, teach people how to make bombs and replace humans. Some claim the bot generates answers with political biases. Musk has pointed out examples on Twitter, Business Insider reported.

Proponents say GPT-like AI will one day help doctors spot diseases that the human eye misses.

GPT-4’s intellectual capabilities are improved, outperforming GPT-3.5 in a series of simulated benchmark exams, Sabrina Ortiz wrote for ZDNET.

Musk, an OpenAI co-founder who no longer has ties to the company, has criticized it in recent months, Business Insider reported. He accused the previously nonprofit OpenAI of becoming a “maximum profit company” through its partnership with Microsoft. He also described its technology as “scary good” on Twitter and said that “we are not far from dangerously strong AI.”

Experts say the technology could lead to “catastrophic outcomes” if the AI systems behave in unexpected ways or become difficult to control.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asked. “Such decisions must not be delegated to unelected tech leaders. Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

Speaking on March 25 on the Lex Fridman podcast, OpenAI CEO Sam Altman said he understands Musk’s reservations about artificial general intelligence (AGI) – AI systems capable of thinking and of learning intellectual tasks the way a human can. He also addressed Musk’s accusation of bias.

“I think it was too biased — and will always be. There will be no one version of GPT that the world ever agrees is unbiased,” Altman said.

Altman told Fridman, “I think it’d be crazy not to be a little bit afraid, and I empathize with people who are a lot afraid … The current worries that I have are that there are going to be disinformation problems or economic shocks, or something else at a level far beyond anything we’re prepared for. And that doesn’t require superintelligence.” 

He raised the hypothetical example that large language models, known as LLMs, could influence the information and interactions social media users experience on their feeds. 

“How would we know if on Twitter, we were mostly having like LLMs direct whatever’s flowing through that hive mind?” Altman asked.

Axios predicted that the letter’s appeal to policymakers will likely “land on deaf ears. U.S. lawmakers are woefully behind on how technological advancements impact the country — they’re still struggling to deal with the advent of social media.”

In the U.S., market forces have long been the primary driver for the growth of specific innovations, according to Axios writer Peter Allen Clark. “Few, if any, tech advancements are coupled with the level of forethought and even-mindedness the letter’s authors request.”

Photos: OpenAI co-founder and CEO Sam Altman speaks at TechCrunch Disrupt San Francisco, Oct. 3, 2019. (Steve Jennings/Getty for TechCrunch, Creative Commons, https://creativecommons.org/licenses/by/2.0/deed.en) / Tesla CEO Elon Musk, Sept. 25, 2020 in Los Angeles (zz/Wil R/STAR MAX/IPx)