AI Extinction Risk on Par with Nuclear War, Say Industry Leaders
A group of prominent leaders in the artificial intelligence (AI) industry has issued a concise statement underscoring their concerns about the risks posed by advanced AI systems. The statement reads, in full: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." It was posted on the website of the Center for AI Safety—an organization dedicated to mitigating societal-scale risks from AI—and effectively echoes concerns previously expressed by figures such as Elon Musk.
The signatories of the statement represent a who’s who of the AI industry, featuring influential figures such as Sam Altman, the CEO of OpenAI, and Demis Hassabis, head of Google DeepMind. Notable Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, widely regarded as pioneers in modern AI, have also added their names to the list.
This is the second such statement released in recent months. Back in March, Elon Musk, Steve Wozniak, and more than a thousand others signed an open letter calling for a six-month pause on training AI systems more powerful than GPT-4, to give the industry and the public time to understand and catch up with the technology. That letter emphasized the rapid growth of AI capabilities and the lack of understanding, predictability, and control over increasingly powerful AI systems.
While AI may not possess the self-awareness depicted in popular media, it already poses risks of misuse and harm through deepfakes and automated disinformation. Chatbots built on large language models (LLMs), such as ChatGPT and Bard, could also reshape content creation, art, and literature, with knock-on effects across industries and employment sectors.
US President Joe Biden recently weighed in on the subject, acknowledging AI's potential to address major challenges such as disease and climate change. He also stressed, however, that tech companies must ensure the safety of their AI products before bringing them to market. At a recent White House meeting, Sam Altman called for regulation of AI, citing the risks the technology could pose.
Amid the many competing perspectives on AI, the new statement aims to highlight a shared concern about AI risks, even among parties who disagree about the exact nature of those risks.
“AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI,” reads the preamble to the statement. “Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously.”