Last week, a group of technology activists and artificial intelligence experts, including Elon Musk, published an open letter calling for a six-month pause on the development of AI systems more powerful than GPT-4, citing their "risks to society and humanity." While halting development could create time to better understand and regulate the societal risks posed by AI, some have argued that the effort is driven by rivals of field leaders such as OpenAI, who want the pause so they can catch up and compete in this space.
A Gartner analyst explained in an interview with VentureBeat: "The six-month pause is a request to stop training models more powerful than GPT-4. GPT-5 will be released soon after GPT-4.5, which is expected to achieve AGI (artificial general intelligence). Once AGI arrives, it will likely be too late to institute safety controls that effectively protect humans from these systems."
What cybersecurity experts think about pausing AI development
Despite concerns about the societal risks posed by generative AI, many cybersecurity experts doubt that pausing AI development would help at all. At best, they argue, such a pause would offer security teams only a temporary window to strengthen their defenses.
One of the most persuasive arguments against halting AI research is that a pause would bind only vendors, not threat actors. By this reasoning, cybercriminals would retain the ability to develop new attacks and sharpen their offensive techniques while defenders stand still.
McAfee CTO Steve Grobman told VentureBeat:
"Pausing the development of the next generation of artificial intelligence will not prevent the technology from moving in dangerous directions. As technology advances, it is essential that organizations and companies maintain norms and standards that keep pace with it, to ensure the technology is used as responsibly as possible."
Instead of halting the development of new models, industry experts suggest that focusing on managing the risks of malicious use of generative AI, and encouraging AI vendors to be more transparent, would do more to reduce the dangers in this field.