Artificial intelligence increases extinction risk, experts say in new warning


Scientists and tech industry leaders, including high-level executives at Microsoft and Google, issued a new warning Tuesday that artificial intelligence is a threat to the human race.

“Mitigating the risk of extinction from AI should be a global priority, along with other societal-scale risks such as pandemics and nuclear war,” the statement said.

Sam Altman, CEO of ChatGPT maker OpenAI, and Geoffrey Hinton, the computer scientist known as the godfather of artificial intelligence, were among hundreds of prominent figures who signed the statement, which was posted on the website of the Center for AI Safety.

With the rise of a new generation of highly capable AI chatbots like ChatGPT, concerns have intensified about artificial intelligence systems overtaking humans and running wild. This has sent countries around the world scrambling to come up with rules for the developing technology, with the European Union expected to finalize its AI Act later this year.

Dan Hendrycks, executive director of the Center for AI Safety, the San Francisco-based nonprofit that organized the statement, said the latest warning was intentionally brief, just one sentence, so it could include a broad coalition of scientists who may not agree on the most likely risks or the best solutions to prevent them.

“There are a variety of people from all the top universities in different fields who are concerned with this and think it’s a global priority,” Hendrycks said. “So we needed to get people to come out of the closet on this issue, because many were quietly speaking among themselves.”

More than 1,000 researchers and technologists, including Elon Musk, signed a much longer letter earlier this year calling for a six-month pause on AI development, saying it poses “profound risks to society and humanity.”

That letter was a response to OpenAI’s release of a new AI model, GPT-4, but leaders from OpenAI, its partner Microsoft and rival Google did not sign and rejected calls for a voluntary industry pause.

In contrast, the latest statement was supported by Microsoft’s chief technology and science officers as well as Demis Hassabis, CEO of Google’s AI research lab DeepMind, and two Google executives who lead AI policy efforts.

Some critics have complained that dire warnings about existential risks from AI makers have played up the capabilities of their products while distracting from calls for more immediate regulation to rein in their real-world problems.

Hendrycks said there’s no reason society can’t manage the “immediate, ongoing pitfalls” of products that generate new text or images while also starting to address the “potential disasters right around the corner.”

It is not too early to sound a warning, he said, comparing the moment to nuclear scientists in the 1930s urging caution even though “we haven’t developed the bomb yet.”

“No one is saying that GPT-4 or ChatGPT is causing these kinds of concerns today,” Hendrycks said. “We are trying to address these risks before they happen, not address disasters after the fact.”

The letter was also signed by experts in nuclear science, epidemiology and climate change. Among the signatories is author Bill McKibben, who sounded the alarm on global warming in his 1989 book “The End of Nature” and warned about AI and related technologies in another book two decades ago.

“Given our failure to heed the early warnings about climate change 35 years ago, it seems to me like it would be smart to really think about this before it’s all a done deal,” he said by email on Tuesday.


This news is auto-generated through an RSS feed. We don’t have any control over it. News sources: Hindustan Times, TechRepublic, Computer Weekly.

