Artificial Intelligence Experts: Stop Before It’s Too Late

One thousand tech leaders and experts, including billionaire Elon Musk and Apple co-founder Steve Wozniak, have signed an open letter urging a halt to artificial intelligence (AI) development.

Musk, Wozniak, and other AI experts insist that work on advanced artificial intelligence should be paused until reliable safeguards are adopted to protect humanity from unwanted scenarios.

The letter, signed by the 1,000, was written by a nonprofit called the Future of Life Institute, Forbes reported, as cited by Breitbart News.

In it, the tech experts and executives call for imposing a moratorium on AI systems more potent than OpenAI’s GPT-4.

The proposed pause would last for at least six months, allowing time for more robust governance and shared safety protocols to be developed.

The Future of Life Institute, which made the letter’s publication possible, is a group “dedicated to directing transformative technology toward enhancing life and reducing significant risks.”

Besides tech sector “titans” such as Tesla and Twitter CEO Elon Musk and Apple co-founder Steve Wozniak, the signatories of the AI letter include researchers from Google-owned DeepMind.

Among them are also well-known machine learning authorities such as Yoshua Bengio and Stuart Russell.

The open letter calls upon all artificial intelligence laboratories and independent experts in the field to collaborate to establish and implement standard safety protocols to govern the design and development of advanced AI technologies.

The protocols in question must guarantee that all AI systems abiding by them are risk-free.

To that end, outside experts not affiliated with the respective companies would audit and monitor the protocols’ implementation.

“The signatories stress that the proposed pause is only a temporary retreat from the dangerous race toward increasingly unpredictable black-box models with emergent capabilities, not a general halt to AI development,” the report points out.

Besides creating shared safety protocols, the letter calls for stronger governance systems for AI development.

These systems would include “provenance and watermarking” provisions to differentiate between “authentic and fake content and track model leaks.”

They would also feature the creation of new regulatory bodies dealing with AI oversight and tracking.

Besides that, the tech experts and leaders also recommend more public funding for research on technical AI safety, as well as measures to ensure that any supplier is held accountable for damage caused by AI.

The letter’s signatories point out that humanity has previously managed to pause the advancement of other technologies with potentially disastrous consequences, and that the development of artificial intelligence should be no different.

“Although the open letter is unlikely to succeed in all of its goals, it does represent a general unease about AI technologies and underscores the need for more stringent regulation,” the report comments.

What is your opinion about the speed with which artificial intelligence is being rolled out? Share your view by emailing [email protected]. Thank you.