The race to create smarter, more capable artificial intelligence (AI) systems is accelerating, but some prominent scientists and tech industry leaders are calling for a pause.

In an open letter released on Wednesday, more than a thousand signatories from various fields and countries, including Elon Musk and Steve Wozniak, urged a six-month moratorium on the development and deployment of advanced AI technologies.

The letter cited concerns about the potential risks and unknown consequences of AI, especially as exemplified by OpenAI's latest release, GPT-4, which is even more powerful and versatile than ChatGPT.

The letter's signatories argued that a pause would allow for more reflection, dialogue, and collaboration on the societal and ethical implications of AI, and enable the development of better governance and regulation frameworks.

The letter has sparked a debate about the future of AI and the responsibility of those who create and deploy it.

What is the letter?

The letter, issued by the Future of Life Institute, a nonprofit organization dedicated to mitigating global catastrophic and existential risks posed by advanced technologies, calls for a six-month pause on training AI systems more powerful than GPT-4.

It argues that AI systems with human-competitive intelligence pose significant risks to society and humanity. Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources, yet, the letter says, that level of planning and management is not happening despite the acknowledged risks.

The letter further criticised what it called an out-of-control race by AI labs to develop and deploy ever more powerful digital minds, adding that powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.

The signatories therefore call on all AI labs to pause, for at least six months, the training of AI systems more powerful than GPT-4, and to use that time to jointly develop and implement a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts.

“If such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” the letter suggested.

The letter says that AI research and development should be refocused on making today's powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal. The development of AI governance systems should also be accelerated. Humanity can enjoy a flourishing future with AI, but this must be done with caution and care.

You can read the letter in full here.

The Signatories of the Letter

A group of prominent scientists, business leaders, and scholars, including engineers from Meta and Google as well as figures from outside the tech industry, have signed the open letter.

Notable names from the list of signatories include Turing Award winner Yoshua Bengio, Berkeley Professor Stuart Russell, Elon Musk, Steve Wozniak, Yuval Noah Harari, Andrew Yang, Jaan Tallinn, Max Tegmark, and Tristan Harris.

The list of more than 1,100 signatories also includes Evan Sharp, co-founder of Pinterest; Chris Larsen, co-founder of Ripple; and Craig Peters, CEO of Getty Images.

All these signatories call for the development of robust AI governance systems that include new regulatory authorities dedicated to AI, oversight and tracking of highly capable AI systems, and liability for AI-caused harm.

You can read the complete list of signatories here.

The Crux of the Arguments

The main point of the letter is to call for a pause on the development of AI systems more powerful than GPT-4, and for the development and implementation of a set of shared safety protocols for advanced AI design and development, rigorously audited and overseen by independent outside experts. The letter argues that the current level of planning and management for these systems is inadequate.

They propose that a pause on the development of these systems, along with the implementation of safety protocols and robust AI governance systems, will allow for a flourishing future with AI while avoiding potentially catastrophic effects on society.

Criticism of the Letter

Many critics of the letter say that it focuses too much on the hypothetical long-term risks of AGI while ignoring near-term harms, such as bias and misinformation, that are already happening, VentureBeat reported.

Arvind Narayanan, professor of computer science at Princeton, suggested that the letter fuels AI hype and benefits the companies that it is supposed to regulate, rather than society.

Alex Engler, a research fellow at the Brookings Institution, told Tech Policy Press that he favoured more credible and effective interventions, such as independent third-party access to and auditing of large ML models, to check corporate claims, enable safe use, and identify emerging threats.

These critics also argued that, instead of arbitrarily slowing down AI development, AI products should be made safe through regulation and audits that follow good practice.

"The letter isn't perfect, but the spirit is right: we need to slow down until we better understand the ramifications," said Gary Marcus, a professor at New York University who signed the letter.

Marcus also expressed concern over the growing lack of transparency among major players in the field of AI, making it difficult for society to prepare for and prevent any potential harm that may arise.

How can AI be dangerous?

While AI has the potential to bring many benefits, it also poses some potential risks and challenges. According to the Future of Life Institute website, when considering how AI might become a risk, these two scenarios are most likely:

1. The AI is programmed to do something devastating: Autonomous weapons are artificial intelligence systems that are programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties.

2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal: This can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult.

Will AI replace jobs?

One of the concerns surrounding the development of AI is that it may replace or reduce the number of jobs available today. As AI technology advances, it may become more efficient and cost-effective for companies to use machines instead of human workers for certain tasks.

Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, has expressed concern about job losses. In a recent interview with ABC News, Altman predicted that AI will eventually replace some jobs and said he is worried about how fast that could happen.

"I think over a couple of generations, humanity has proven that it can adapt wonderfully to major technological shifts," Altman said. "But if this happens in a single-digit number of years, some of these shifts ... That is the part I worry about the most."

He suggests that people should view AI as a tool, not a substitute. He asserts that human creativity knows no bounds, and as new jobs emerge, people will find new things to do.

In conclusion, the development of artificial intelligence presents both significant benefits and risks. While AI has the potential to revolutionize industries and improve our quality of life, it can also disrupt the job market, create new ethical dilemmas, and pose significant risks to our privacy and security. The experts' letter is one sign that these risks are being acknowledged.


Title: EXPLAINED: Why Elon Musk and Experts Want to Pause AI Research for Six Months?

By: Mayur Deokar
