The Pentagon has updated its guidelines governing the development, testing and use of autonomous and semi-autonomous weapons for the first time in nearly 10 years, addressing the growing role of artificial intelligence in future battles.
The US Department of Defense last revised these guidelines in 2012, and this week it issued a new update. The department says it remains committed to developing autonomous systems and to integrating artificial intelligence into its future military projects. The update underscores the US government's continued focus on developing these weapons, even as a number of countries and activists oppose their development.
The updated guidelines account for advances in artificial intelligence over the past decade and introduce a new oversight body to manage these activities. In most cases, autonomous weapons must now receive approval from the Vice Chairman of the Joint Chiefs of Staff, the Under Secretary of Defense for Policy and the Under Secretary of Defense for Research and Engineering before they can enter development.

The Pentagon announced its plan to use artificial intelligence
“This directive now makes clear that these systems, like any system that uses artificial intelligence, whether they are weapons or not, must be subject to these guidelines,” said Michael Horowitz, director of the Defense Department’s Emerging Capabilities Policy Office, describing the decision as a matter of good governance.
The directive also applies the ethical AI principles the department introduced in 2020 to autonomous weapons systems. While the update is not a complete road map, it outlines the department's future direction with a focus on emerging technologies.
Many countries currently oppose the use of artificial intelligence in military weapons, and at least 30 of them have jointly called for a ban on the development of these systems. In 2019, UN Secretary-General António Guterres said: “Machines with the power and authority to select targets and take people’s lives, with no human involved in the process, are politically unacceptable and contrary to ethics.”