WASHINGTON – The U.S. Department of Defense recently held a public comment meeting at Stanford University to discuss artificial intelligence ethics. The Defense Innovation Board plans to provide the DOD with ethical guidelines and recommendations for the development or acquisition of autonomous systems in a report this summer, Biometric Update reports.
The Military & Aerospace Electronics take:
7 May 2019 -- DOD Deputy General Counsel Charles Allen said at the start of the meeting that the military's artificial intelligence (AI) policy will be crafted to adhere to international humanitarian law, the limits on AI in weaponry set by a 2012 DOD directive, and the military's 1,200-page manual on the law of war.
Later, former U.S. Marine Peter Dixon, founder and CEO of Second Front Systems, said that AI used to identify people in drone footage can save lives. "If we have an ethical military, which we do, are there more civilian casualties that are going to result from a lack of information or from information?" he asked.
Multiple speakers expressed concern that biometric technologies could be used for weapons targeting by autonomous systems. Other recommendations included a more unified national policy to compete internationally, the development of AI systems that report their own confidence levels, and closer cooperation between academia and the AI industry.
Related: Users of autonomous weapons with artificial intelligence must follow a technological code of conduct
Related: Electronic warfare technology heading-up the battlefield
Related: The increasing role of COTS in high-fidelity simulation
John Keller, chief editor
Military & Aerospace Electronics