Users of autonomous weapons with artificial intelligence must follow a technological code of conduct
WASHINGTON – Can a victor truly be crowned in the great power competition for artificial intelligence? According to Russian President Vladimir Putin, “whoever becomes the leader in this sphere will become the ruler of the world,” The Hill reports.
The Military & Aerospace Electronics take:
23 April 2019 -- By 2035 the U.S. expects to have ground forces teaming with robots. The discussion of how to integrate autonomous weapons responsibly with human military elements, however, is only slowly unfolding. As Congress begins evaluating what the Defense Department should do, it must also consider preparing tomorrow’s warfighters for how armed robots will test military ethics.
Each service branch teaches its members to follow a code of conduct, such as the Soldier’s Creed and Warrior Ethos, the Airman’s Creed, and the Sailor’s Creed. Reflected across these distinct codes, however, is a shared commitment to values such as duty, honor, and integrity.
The Warrior-in-the-Design concept embodies both the Defense Directive that autonomous systems be designed to support the human judgment of commanders and operators in employing lethal force, and Human Rights Watch's definition of human-out-of-the-loop weapons, i.e., robots that can select targets and apply force without human input or interaction.
John Keller, chief editor
Military & Aerospace Electronics