This week, big names in the industry, including Elon Musk, Yoshua Bengio, and Steve Wozniak, published an open letter asking for a 6-month moratorium on building AI systems bigger and more powerful than GPT-4.

In summary:

  • The latest AI systems are ones that “no one – not even their creators – can understand, predict, or reliably control”.
  • AI systems are becoming human-competitive, and humans need to control the extent of their impact. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.
  • Main ask: “we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4”.
  • AI labs and experts need to develop protocols for verifying that an AI system is safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.
  • One of the papers the letter references, X-Risk Analysis for AI Research, identifies 8 hazards that AI systems may pose; see the list in the header picture. I covered some of the power-seeking risks in my previous posts: here and here.

Personally, I don’t think this will work, as there is no practical way to implement such a restriction. Governments could potentially intervene, but that would set a very dangerous precedent.

GPT-4 and similar systems have enormous potential to improve lives, especially in education, healthcare, and even the tech sector. In my view, this open letter will push the private and public sectors to think through transparency regulations for the companies that have the scale to create these giant models.


If you like what I write, consider subscribing to my newsletter, where I share weekly practical AI tips, my thoughts on AI, and experiments.
Header photo credit: the paper X-Risk Analysis for AI Research, https://arxiv.org/pdf/2206.05862.pdf


This article reflects my personal views and opinions only, which may differ from those of the companies and employers I am associated with.