Tech Leaders and AI Experts Call for Six-Month Pause on ‘Out-of-Control’ AI Experiments
A group of tech leaders and prominent AI researchers has published an open letter urging AI labs and companies to pause their work for at least six months. The letter, signed by industry veterans including Steve Wozniak and Elon Musk, warns that the risks of AI are being ignored and that existing systems must be thoroughly studied and tested before newer, more advanced technologies are deployed, such as successors to GPT-4, OpenAI's latest model.
GPT-4 is a large language model that can respond to text or image inputs and has been used by companies such as Microsoft in its revamped Bing search engine. Google has also recently introduced its own generative AI system, Bard, which is powered by LaMDA. The race to deploy the most advanced AI technology has drawn concern from some in the industry, who believe the technology is being developed and released without proper planning or management.
“Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” the letter states.
The letter, published by the Future of Life Institute (FLI), an organization dedicated to minimizing the risks and misuse of new technology, also emphasizes the importance of care and forethought in ensuring the safety of AI systems. Musk has previously donated $10 million to FLI to fund research on AI safety.
Other signatories of the letter include prominent AI figures such as Center for AI and Digital Policy president Marc Rotenberg, MIT physicist and Future of Life Institute president Max Tegmark, and author Yuval Noah Harari. Harari also co-wrote a New York Times op-ed last week warning about AI risks, together with fellow signatories Tristan Harris and Aza Raskin, founders of the Center for Humane Technology.
The open letter is another sign of the growing unease around AI and of calls for more careful, considered development and deployment of the technology.