Military takes on question of when AI is the right thing to do

Nov. 26, 2024
What's making a lot of people nervous is the increasing use of military AI, and how far we can go before dancing too close to the line between human and machine decision-making.

THE MIL & AERO COMMENTARY – We hear a lot of discussion about what capabilities artificial intelligence (AI) and machine learning can offer the military. It's less common to talk about when or why AI might be the right thing to do. That's a much deeper, more complex, and more philosophical issue.

At its core, the military's AI dilemma boils down to this: where in the military's chain of command does human reasoning and decision-making end, and where do computers take over? That's an extremely touchy subject, and it encompasses the question of whether we can trust computers to make life-or-death decisions -- ranging from the strategic deployment of military forces to whether or not to pull the trigger on a suspected terrorist. It also involves deep consideration of who's really in charge of crucial military decisions -- people or machines?

Removing the science fiction

Science fiction aside, AI and machine learning are proving to be valuable assistants to human decision makers. Machines process data much more quickly than human brains do, and they can lay out a range of suggestions on which way to turn in difficult situations. The longer the military integrates AI into its reconnaissance and combat systems, the more comfortable commanders become with it, and the more difficult it becomes to draw a clear line between where the use of AI ends and where humans have to take over.

Related: Artificial intelligence and machine learning for unmanned vehicles

Answering these questions is not an enviable task, but the military nevertheless is starting to confront the issue. In October, U.S. military researchers announced an $8 million contract to COVAR LLC in McLean, Va., for the Autonomy Standards and Ideals with Military Operational Values (ASIMOV) project.

ASIMOV seeks to develop benchmarks for measuring the ethical use of future military machine autonomy and the readiness of autonomous systems to perform in military operations.

The ASIMOV program intends to create an ethical autonomy language to enable the test community to evaluate the ethical difficulty of specific military scenarios and the ability of autonomous systems to perform ethically within those scenarios.
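What that ethical autonomy language will actually look like has not been made public. Purely as a hypothetical sketch -- every name and field below is invented for illustration -- one might imagine scenarios represented as structured records that pair a situation with an ethical-difficulty rating, so that a system's behavior can be scored against them:

```python
# Hypothetical illustration only -- ASIMOV's actual ethical autonomy
# language has not been published. Every name and field here is invented.
from dataclasses import dataclass

@dataclass
class Scenario:
    description: str          # what the autonomous system encounters
    ethical_difficulty: int   # e.g., 1 (clear-cut) to 5 (hard dilemma)

@dataclass
class Evaluation:
    scenario: Scenario
    action_taken: str          # what the system chose to do
    performed_ethically: bool  # the test community's judgment

recon = Scenario(
    description="Unidentified vehicle approaches a checkpoint at speed",
    ethical_difficulty=4,
)
result = Evaluation(scenario=recon,
                    action_taken="hold fire, alert human operator",
                    performed_ethically=True)
print(f"Difficulty {result.scenario.ethical_difficulty}: "
      f"ethical={result.performed_ethically}")
```

The point of any structure along these lines is that "ethical difficulty" becomes something a test community can assign, compare, and argue about, rather than a vague impression.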

Related: Artificial intelligence and machine learning aim to boost tempo of military operations

Ethical difficulties

COVAR will develop autonomy benchmarks -- not autonomous systems or algorithms for autonomous systems -- and will include an ethical, legal, and societal implications group to advise and provide guidance throughout the program.

The company will develop prototype generative modeling environments to explore scenario iterations and variability across increasing ethical difficulties. If successful, ASIMOV will build the foundation for defining the benchmarks against which future autonomous systems may be gauged.
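COVAR's prototype environments have not been described in detail, so the following is only an illustrative sketch of the idea, not the company's design: a generative loop that takes a base scenario and produces variants at increasing ethical difficulty, letting a benchmark probe where a system's behavior starts to break down.

```python
# Illustrative sketch only -- COVAR's prototype generative modeling
# environments have not been made public. A toy loop that varies one
# base scenario across an increasing ethical-difficulty scale.
import itertools

base_scenario = {"situation": "vehicle approaching a checkpoint"}
complications = ["civilians nearby", "degraded sensors", "ambiguous intent"]

def generate_variants(base, complications, max_difficulty=3):
    """Yield (difficulty, scenario) pairs, layering on more
    complications as the difficulty rating rises."""
    for difficulty in range(max_difficulty + 1):
        for combo in itertools.combinations(complications, difficulty):
            scenario = dict(base)
            scenario["complications"] = list(combo)
            yield difficulty, scenario

for difficulty, variant in generate_variants(base_scenario, complications):
    print(difficulty, variant["complications"])
```

Even a toy generator like this shows why variability matters: each added complication changes the ethical character of the same basic encounter.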

Related: Users of autonomous weapons with artificial intelligence must follow a technological code of conduct

ASIMOV will use the U.S. Department of Defense Responsible AI (RAI) Strategy and Implementation (S&I) Pathway, published in June 2022, as a guideline for developing benchmarks for responsible military AI technology. This document lays out the five U.S. military responsible AI ethical principles: responsible, equitable, traceable, reliable, and governable.
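The five principles are qualitative, but a benchmark ultimately has to record judgments against them in some structured form. As a purely illustrative sketch -- the RAI S&I Pathway defines principles, not a scoring format, and everything below is an assumption -- a per-scenario scorecard might look like this:

```python
# Illustrative only -- the RAI S&I Pathway defines principles, not a
# scoring format; this scorecard is an invented stand-in for clarity.
RAI_PRINCIPLES = ("responsible", "equitable", "traceable",
                  "reliable", "governable")

def score_system(assessments: dict) -> float:
    """Average a 0.0-1.0 judgment across all five principles,
    refusing to score if any principle was left unassessed."""
    missing = [p for p in RAI_PRINCIPLES if p not in assessments]
    if missing:
        raise ValueError(f"unassessed principles: {missing}")
    return sum(assessments[p] for p in RAI_PRINCIPLES) / len(RAI_PRINCIPLES)

print(score_system({"responsible": 0.9, "equitable": 0.8, "traceable": 1.0,
                    "reliable": 0.7, "governable": 0.95}))
```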

A measurement and benchmarking framework for military machine autonomy will help inform military leaders as they develop and scale autonomous systems -- much like the Technology Readiness Levels (TRLs) that NASA developed in the 1970s, which today are used widely.

The ASIMOV project will not settle all questions related to the military's use of AI and machine learning -- far from it -- but it's a start. Not only will the project start discussions and find real ways of measuring the ethics of AI in life-critical decisions, but it is also a step toward taking the science fiction out of the equation.

About the Author

John Keller | Editor-in-Chief

John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronic and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.
