Elon Musk’s letter calling for a pause on artificial intelligence development sparks controversy

A few days ago, an open letter calling for a pause on the development of artificial intelligence was published, signed by figures such as Elon Musk, the CEO of Tesla and Twitter, and Steve Wozniak, the co-founder of Apple. The letter gathered more than 1,000 signatures. But now some of the people listed say they never signed it at all, and several experts have condemned the use of their research in such a letter.

In this widely discussed letter, technology figures asked laboratories and companies working on artificial intelligence to pause the development of systems more powerful than the GPT-4 model. A number of engineers from Amazon, DeepMind, Google, Meta, and Microsoft also signed the letter.

GPT-4 artificial intelligence model

The GPT-4 model, recently introduced by OpenAI, can hold human-like conversations and is even able to recognize images. It has various other uses as well, such as writing songs or summarizing long texts. According to the letter’s signatories, these artificial intelligence systems could pose a great danger to humanity:

“Independent experts and AI development labs should use this pause to jointly develop a set of safety protocols for AI design and development; protocols that should be rigorously audited and overseen by outside experts.”

Criticism over fake signatures and misused citations

The letter was organized by the Future of Life Institute (FLI), and it cites 12 pieces of research from experts including professors and former engineers at OpenAI, Google, and DeepMind. But now, according to the Guardian, four experts whose research is cited in the letter have expressed concern about the claims it makes.

In addition, the letter was published without any signature-verification process, and it is now clear that some of the people listed never signed it at all. These include “Xi Jinping” and Yann LeCun, Meta’s chief AI scientist. LeCun has officially stated on Twitter that he did not sign the letter and does not support it.

Yann LeCun

Critics of the letter say the FLI, which is largely funded by the Musk Foundation, has resorted to fanciful apocalyptic scenarios instead of addressing the fundamental problems of artificial intelligence, such as the racist and sexist behavior of current systems.

Margaret Mitchell, a former head of Google’s AI ethics team and now a scientist at Hugging Face, whose paper is cited in the letter, criticized it, saying it is unclear what the authors mean by “more powerful than GPT-4”:

“This letter articulates a set of priorities and narratives about AI that benefit FLI’s supporters. Ignoring current harms is a privilege that some researchers do not have.”

Another researcher, Shiri Dori-Hacohen, who has written about the impact of the current use of artificial intelligence systems on decisions about climate change, nuclear war, and other existential threats, says that AI does not need to reach human-level intelligence to exacerbate such risks.

In response to these criticisms, FLI president Max Tegmark says that both the long-term and short-term risks of artificial intelligence should be taken seriously: “When we cite a person in the letter, it means that person is endorsing the sentence in question, not the whole letter. Nor does citing someone mean that we endorse all of their views.”
