More than 1,000 artificial intelligence experts around the world, including Elon Musk, have signed an open letter calling for a halt to the development and research of AI systems more powerful than GPT-4. The letter asks for a six-month pause on all work related to advanced AI, as the signatories believe this development could pose a serious threat to human society. 

The open letter, titled ‘Pause Giant AI Experiments’, was issued by the Future of Life Institute, a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, particularly the existential risk from advanced artificial intelligence. Signatories include well-known AI scientists, professors, and experts such as Turing Award winner Yoshua Bengio, Berkeley professor of computer science Stuart Russell, Apple co-founder Steve Wozniak, Sapiens author Yuval Noah Harari, Stability AI CEO Emad Mostaque, and more. 

The letter states that AI systems with human-competitive intelligence pose profound risks to society and humanity, a risk that top AI labs themselves have acknowledged. It adds that recent AI development has turned into a race to build and deploy ever more powerful digital minds that may be difficult to control, and argues that such powerful AI systems should be developed only once their positive effects and risks can be confidently managed.

Finally, the petition calls for a six-month pause on training AI systems more powerful than GPT-4. It asks AI labs and independent experts to jointly develop shared safety protocols that ensure such systems are safe beyond a reasonable doubt, and says that AI developers must work with policymakers to accelerate the development of robust AI governance systems. 

Gary Marcus, a professor at New York University and a signatory of the letter, said that while the letter isn’t perfect, it is right in spirit: until we understand the ramifications of AI, it is better to slow down. He added that the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize, and that AI could cause serious harm to humans.

Across the globe, many institutions and governments have voiced concerns about the ethical and legal issues raised by advanced AI systems like ChatGPT. Europol recently cautioned that AI systems such as ChatGPT could be exploited for cybercrime, phishing attempts, and spreading disinformation, and many governments are working on legal policies and regulatory frameworks to control AI. 

Many educational institutions and universities have also raised concerns about ChatGPT and similar AI programs. Several New York colleges, for instance, have decided to prohibit students from using ChatGPT to complete their homework, blocking access to the ChatGPT website on their servers. The education department has supported this move, restricting access to ChatGPT over concerns about its potential negative impact on student learning and the accuracy and safety of its content.
