An anomaly in the AI warning letter signed by Elon Musk

The letter, signed by Elon Musk and many other tech figures, immediately drew a wave of outrage after some of the people listed said their signatures had been forged.

More than 30,000 people, including Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, politician Andrew Yang and several leading AI researchers, signed a letter calling for a six-month pause on training AI systems more powerful than GPT-4. However, the letter immediately caused outrage when some of those listed said they had never signed it.

Forged signatures
After the letter circulated, several people said their names had been added without their consent, including Sam Altman, CEO of OpenAI, and Yann LeCun, an AI researcher at Meta.
This sparked a wave of outrage. Supporters of the letter, however, argue that this is not a serious concern.

“The letter is not perfect, but its purpose is right,” said Gary Marcus, a professor of psychology at New York University.

In addition to Professor Marcus, many prominent figures in the technology world also signed the letter, such as Emad Mostaque, CEO of Stability AI; historian Yuval Noah Harari; Evan Sharp, co-founder of Pinterest; as well as members of Google DeepMind and Microsoft.


The letter calls for a pause on AI training
The letter was published by the Future of Life Institute, a non-profit organization whose mission is to “reduce global risks from technology”.

Billionaire Elon Musk is a longtime supporter of the organization, having donated $10 million to the institute in 2015.

Most members of the Future of Life Institute are adherents of “longtermism”, an ideology that prioritizes solving humanity’s problems in the distant future.

“Longtermism” is promoted and advocated by a number of tech elites, including Sam Bankman-Fried, the former CEO of FTX.

A notable excerpt from the letter:

“New AI systems should be developed only when their impact is positive and their risks can be controlled. Therefore, we call on all AI labs to immediately pause the training of AI systems more powerful than GPT-4 for at least six months. AI labs should use this time to develop safety protocols for advanced AI design and development, rigorously audited and monitored by independent outside experts.

This does not mean a pause on AI development in general, merely a step back to prepare before pushing it further.”

The letter also mentions the race between tech giants like Microsoft and Google, which have released a number of new AI products in recent months.

At a press conference on March 29, computer scientist Yoshua Bengio, one of the letter’s signatories, expressed concern about the “expansion” of AI and the technology companies behind it.

“Power is concentrated on tech giants and AI tools that have the potential to destabilize democracy,” Bengio said.

According to Bengio, a six-month pause would give regulators, including governments, time to understand, test and verify the safety of AI systems.


AI experts push back
Meanwhile, some experts argue that the letter exaggerates the harms of AI instead of proposing ways to solve immediate problems.

Some argue that it only promotes “longtermism”, an ideology criticized as toxic and anti-democratic because it upholds the values of the super-rich and excuses ethical violations in the name of humanity’s future.

“Basically, the letter is misdirected. It draws readers’ attention to the hypothetical harms of AI and proposes only vague solutions,” said Sasha Luccioni, a research scientist at Hugging Face.

According to Arvind Narayanan, an associate professor at Princeton University (USA), questions raised in the letter, such as whether AI will “replace humans and take over human civilization”, are too far-fetched and distract from current problems. After all, AI was built to serve humans, not as a “non-human mind” that renders us “obsolete”, as the letter describes.

In addition, Timnit Gebru, founder of the Distributed AI Research Institute, said she finds it ironic that the letter calls for a pause on models more powerful than GPT-4 but does not address the concerns surrounding GPT-4 itself.

“Urgent issues like data theft, poverty in Africa or how to develop society… are what we need to be concerned about,” Gebru said.
