A group of industry leaders warned on Tuesday that the artificial intelligence technology they are building may one day pose an existential threat to humanity and should be considered a societal risk on par with pandemics and nuclear wars.

“Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” reads a one-sentence statement released by the Center for AI Safety, a nonprofit organization. The open letter has been signed by more than 350 executives, researchers and engineers working in A.I.
The signatories included top executives from three of the leading A.I. companies: Sam Altman, chief executive of OpenAI; Demis Hassabis, chief executive of Google DeepMind; and Dario Amodei, chief executive of Anthropic.
Geoffrey Hinton and Yoshua Bengio, two of the three researchers who won a Turing Award for their pioneering work on neural networks and are often considered “godfathers” of the modern A.I. movement, signed the statement, as did other prominent researchers in the field. (The third Turing Award winner, Yann LeCun, who leads Meta’s A.I. research efforts, had not signed as of Tuesday.)
The statement comes at a time of growing concern about the potential harms of artificial intelligence. Recent advancements in so-called large language models — the type of A.I. system used by ChatGPT and other chatbots — have raised fears that A.I. could soon be used at scale to spread misinformation and propaganda, or that it could eliminate millions of white-collar jobs.