More than a third of artificial intelligence researchers believe that the technology's development could lead to a catastrophe "comparable to a nuclear one." The figure comes from a survey of AI experts conducted by researchers at three American universities and cited in Stanford University's large 2023 AI Index report on the state of the field.
According to the study, 36% of respondents considered it plausible that artificial intelligence could cause a global catastrophe. Meanwhile, 71% of experts acknowledged the possibility that the development of AI could bring about societal change as revolutionary in its effects as the Industrial Revolution.
The question about a "catastrophe comparable to a nuclear one" nevertheless drew criticism from the scientific community: most of those who agreed that artificial intelligence could cause such a catastrophe said they would have preferred a milder formulation.
Experts also pointed out that AI is not capable of making decisions of its own: everything it produces is drawn from records of human experience, the data on which modern language models such as ChatGPT are trained.
In March, Apple co-founder Steve Wozniak, entrepreneur Elon Musk, and more than a thousand other business figures and experts signed an open letter calling on all AI labs worldwide to pause the training of AI systems more powerful than GPT-4 for at least six months.
Around the same time, Eliezer Yudkowsky, a well-known researcher in the field of artificial intelligence, published a column in Time magazine calling not merely for a suspension but for a complete, indefinite prohibition on further training of large language models. In it, he went so far as to propose bombing data centers in countries that violate such an agreement.