Wanted: help in trusting military artificial intelligence (AI) and machine learning foundation models
ARLINGTON, Va. – U.S. military researchers are asking industry for ways of assuring trust in complex military artificial intelligence (AI) programming, so that military leaders can use sophisticated robotics for crucial jobs without risking erroneous and dangerous results.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) in Arlington, Va., issued an advanced research concepts opportunity (DARPA-EA-24-01-0) last week for the Safe and Assured Foundation Robots for Open Environments (SAFRON) project.
SAFRON seeks to enhance trust in AI-based robots that use sophisticated foundation models, which can help computer experts develop AI and machine learning for military applications quickly and cost-effectively.
Foundation models are large deep-learning neural networks trained on extremely large datasets; they can serve as the basis for new AI applications without requiring scientists to develop AI programs from scratch.
The term describes machine learning trained on a broad spectrum of generalized, unlabeled data, able to perform a wide variety of general tasks such as understanding language, generating text and images, and conversing in natural language.
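As a minimal sketch of the reuse pattern described above, the snippet below loads a pretrained model and applies it to a language task with no task-specific training. It assumes the open-source Hugging Face transformers library; the model name is illustrative, not one named in the solicitation.

```python
# Minimal sketch: reusing a pretrained foundation model instead of
# training a network from scratch. Assumes the Hugging Face
# "transformers" library is installed; "gpt2" is an illustrative model.
from transformers import pipeline

# Load a pretrained text-generation model; no task-specific training needed.
generator = pipeline("text-generation", model="gpt2")

# The same general-purpose model handles an arbitrary language prompt.
result = generator("A foundation model is", max_new_tokens=30)
print(result[0]["generated_text"])
```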
Foundation models, for example, can enable robots to parse natural-language directions for complex tasks and then execute those tasks in real-world conditions. Although this represents a dramatic break from existing autonomous systems, natural-language direction for open-world autonomy presents a safety and assurance challenge.
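The closed loop the article describes might look like the hypothetical sketch below: a foundation model turns a free-form order into discrete robot actions, which the system then executes. Every name here (plan_with_model, execute, the action vocabulary) is an illustrative assumption, not part of any DARPA or vendor API.

```python
# Hypothetical sketch of natural-language direction for a robot:
# a foundation model parses an order into actions; the executor runs them.
from typing import List

KNOWN_ACTIONS = {"move_to", "pick_up", "place", "report"}  # illustrative vocabulary

def plan_with_model(instruction: str) -> List[str]:
    """Stand-in for a foundation-model call that turns free-form
    language into an action sequence."""
    # A real system would prompt a large model here; this stub
    # returns a fixed plan for illustration only.
    return ["move_to depot", "pick_up crate", "move_to outpost", "place crate"]

def execute(plan: List[str]) -> None:
    for step in plan:
        action = step.split()[0]
        if action not in KNOWN_ACTIONS:
            raise ValueError(f"unvetted action from model: {action!r}")
        print(f"executing: {step}")  # robot actuation would go here

execute(plan_with_model("Take the crate from the depot to the outpost."))
```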
Current methods for assuring learning-enabled systems are inadequate for AI foundation models, DARPA researchers say. Formal neural-network verifiers to date have been effective only in narrow scenarios and do not scale to large foundation models.
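To make the scaling point concrete, the sketch below uses interval bound propagation, one technique behind formal neural-network verifiers, to certify an output range for a tiny two-layer ReLU network given a box of possible inputs. The weights are illustrative; tighter verification methods grow rapidly more expensive with network size, which is why such tools have worked on small networks but not on foundation models.

```python
# Sketch of formal verification via interval bound propagation:
# propagate worst-case input bounds through a tiny ReLU network
# to certify an output range. Weights are illustrative.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Elementwise bounds of W @ x + b for any x in [lo, hi]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

# Tiny 2-layer ReLU network.
W1, b1 = np.array([[1.0, -1.0], [0.5, 2.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])

lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])  # input box
lo, hi = interval_affine(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)          # ReLU is monotone
lo, hi = interval_affine(lo, hi, W2, b2)
print(f"certified output range: [{lo[0]:.3f}, {hi[0]:.3f}]")
```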
In addition, existing training and alignment methodologies are not very robust, and do not account for the complex behaviors that AI-enabled robots may encounter in unconstrained open-world environments. Even worse, foundation models are known to exhibit errant behavior such as hallucination and false confidence in reasoning.
Assurances are crucial to deploying foundation model-enabled robots; a robot controlled by a hallucinating foundation model could fail to execute a critical task, researchers point out.
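One simple guard against the hallucination failure mode just described is to ground a model-generated plan against the robot's own world model before acting: if the plan references an object no sensor ever observed, refuse it. The sketch below is hypothetical; the object names and perception set are illustrative assumptions.

```python
# Illustrative grounding check: reject plans that reference objects
# absent from the robot's perceived world model (a hallucination symptom).
PERCEIVED_OBJECTS = {"crate_01", "pallet_03", "door_east"}  # from sensors

def grounded(plan: list) -> bool:
    """Return True only if every referenced object was actually observed."""
    for step in plan:
        _, _, obj = step.partition(" ")
        if obj and obj not in PERCEIVED_OBJECTS:
            print(f"ungrounded reference {obj!r}; refusing plan")
            return False
    return True

print(grounded(["pick_up crate_01", "open door_east"]))  # True: all grounded
print(grounded(["pick_up crate_99"]))                    # hallucinated object
```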
To fill that gap, the SAFRON project seeks to answer a core question: how, and to what extent, can we assure that foundation model-enabled robots will behave only as directed and intended?
SAFRON seeks to explore approaches that will lead to assurances about the behavior of robots that use foundation models, with particular emphasis on robots that receive commands in natural language; operate in unstructured open-world environments; and incorporate foundation models in closed-loop decision-making. Approaches that provide assurances with minimal additional real-time supervision from a human operator also are of interest.
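One assurance pattern that fits the emphasis on minimal real-time human supervision is an independent runtime monitor: every model-proposed command is checked against fixed safety constraints before it reaches the actuators, and a human is consulted only on rejection. The sketch below is a hedged illustration; the geofence, speed limit, and function names are assumptions, not requirements from the solicitation.

```python
# Hedged sketch of a runtime monitor that validates commands against a
# certified envelope, independent of the model's own (possibly false)
# confidence. All limits and names are illustrative.
from dataclasses import dataclass

@dataclass
class Waypoint:
    x: float
    y: float

GEOFENCE = (-100.0, 100.0)   # allowed range per coordinate, meters
MAX_SPEED = 5.0              # meters per second

def monitor(target: Waypoint, speed: float) -> bool:
    """True if the command stays inside the certified envelope."""
    in_fence = all(GEOFENCE[0] <= v <= GEOFENCE[1] for v in (target.x, target.y))
    return in_fence and 0.0 < speed <= MAX_SPEED

def dispatch(target: Waypoint, speed: float) -> None:
    if monitor(target, speed):
        print(f"executing move to ({target.x}, {target.y}) at {speed} m/s")
    else:
        print("command rejected; escalating to human operator")

dispatch(Waypoint(40.0, -25.0), 3.0)  # passes the monitor
dispatch(Waypoint(400.0, 0.0), 3.0)   # outside geofence; escalated
```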
Interested companies should submit abstracts no later than 13 Jan. 2025 to the DARPA submission website at https://baa.darpa.mil.
Those submitting promising abstracts may be invited to give oral presentations. Email questions to [email protected]. More information is online at https://sam.gov/opp/1f2357e16b9b4d849ab9a0533a012790/view.
John Keller | Editor-in-Chief
John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronics and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.