COVAR to explore ethical use of artificial intelligence (AI) and machine autonomy in military applications
EGLIN AIR FORCE BASE, Fla. – U.S. military researchers wanted to explore the ethical and technical challenges of using artificial intelligence (AI) and machine autonomy in future military operations. They found a solution from COVAR LLC in McLean, Va.
Officials of the Munitions Directorate of the U.S. Air Force Research Laboratory at Eglin Air Force Base, Fla., announced an $8 million contract to COVAR last month for the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) project.
The Air Force Research Laboratory awarded the contract on behalf of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va.
ASIMOV aims to develop benchmarks to measure the ethical use of future military machine autonomy, and the readiness of autonomous systems to perform in military operations.
Ethical performance
The rapid development of machine autonomy and artificial intelligence (AI) technologies demands ways to measure and evaluate the technical and ethical performance of autonomous systems. ASIMOV will develop and demonstrate autonomy benchmarks; it will not develop autonomous systems or algorithms for autonomous systems.
The ASIMOV program intends to create an ethical autonomy language that enables the test community to evaluate the ethical difficulty of specific military scenarios and the ability of autonomous systems to perform ethically within those scenarios.
COVAR will develop prototype modeling environments to explore military scenarios for machine automation and its ethical difficulties. If successful, ASIMOV will build some of the standards against which future autonomous systems may be judged.
The program also will include an ethical, legal, and societal implications group to advise the performers and provide guidance throughout the effort. COVAR's prototype generative modeling environments will enable researchers to explore scenario iterations and variability across increasing levels of ethical difficulty, building the foundation for benchmarks against which future autonomous systems may be gauged.
Responsible AI
ASIMOV will use the Responsible AI (RAI) Strategy and Implementation (S&I) Pathway, published in June 2022, as a guideline for developing benchmarks for responsible military AI technology. This document lays out the five U.S. military responsible-AI ethical principles: responsible, equitable, traceable, reliable, and governable.
A measurement and benchmarking framework for military machine autonomy will help inform military leaders as they develop and scale autonomous systems -- much like the Technology Readiness Levels (TRLs) developed in the 1970s, which are widely used today.
ASIMOV is a two-phase, 24-month program. For more information contact COVAR LLC online at https://covar.com, the Air Force Research Laboratory Munitions Directorate at https://www.afrl.af.mil/RW/, or DARPA at https://www.darpa.mil/program/autonomy-standards-and-ideals-with-military-operational-values.
John Keller | Editor-in-Chief
John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronic and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.