Industry asked for trusted computing shielding of artificial intelligence (AI) in information warfare
ARLINGTON, Va. – U.S. military researchers are reaching out to industry to prevent enemy attempts to corrupt or spoof artificial intelligence (AI) systems by subtly altering or manipulating information the AI system uses to learn, develop, and mature.
Officials of the U.S. Defense Advanced Research Projects Agency (DARPA) issued a solicitation on Wednesday (DARPA-PA-19-03-09) for the Reverse Engineering of Deceptions (RED) project, which aims at reverse engineering the toolchains of information deception attacks.
A deceptive information attack describes an enemy attempt subtly to alter or manipulate information used by a human or machine learning system, so as to change a computational outcome in the adversary's favor.
Machine learning techniques are susceptible to enemy information warfare attacks at training time and when deployed. Similarly, humans are susceptible to being deceived by falsified images, video, audio, and text. Deception plays an increasingly central role in information warfare attacks.
The Reverse Engineering of Deceptions (RED) effort will develop techniques that automatically reverse engineer the toolchains behind attacks such as multimedia falsification, enemy machine learning attacks, or other information deception attacks.
Recovering the tools and processes for such attacks provides information that may help identify an enemy. RED will seek to develop techniques that identify attack toolchains automatically, and develop scalable databases of attack toolchains.
RED Phase 1 will produce trusted-computing algorithms to identify the toolchains behind information deception attacks. The project's second phase will develop technologies for scalable databases of attack toolchains to support attribution and defense.
The project also seeks to develop techniques that require little or no a priori knowledge of specific deception toolchains; automatically cluster attack examples together to discover families of deception toolchains; generalize across several information deception scenarios, such as enemy machine learning attacks and media manipulation; learn unique signatures from just a few attack examples; and scale to internet volumes of information.
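To illustrate the clustering goal described above, the sketch below groups hypothetical attack "signatures" (feature vectors) into families by similarity. This is purely an assumption-laden illustration: the signature format, the greedy clustering approach, and all names and data are invented here, and do not reflect any actual RED technique, which the solicitation does not specify.

```python
# Illustrative sketch ONLY: grouping hypothetical attack signatures into
# "families" by cosine similarity. All data and names are invented; the
# actual RED program techniques are not described in this article.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_signatures(signatures, threshold=0.9):
    """Greedy single-pass clustering: each signature joins the first
    existing family whose representative vector it closely resembles;
    otherwise it starts a new family."""
    families = []  # list of (representative_vector, member_names)
    for name, vec in signatures:
        for rep, members in families:
            if cosine(vec, rep) >= threshold:
                members.append(name)
                break
        else:
            families.append((vec, [name]))
    return [members for _, members in families]

# Hypothetical signatures extracted from three observed attacks
examples = [
    ("attack_a", [1.0, 0.1, 0.0]),
    ("attack_b", [0.9, 0.2, 0.0]),   # near attack_a: same toolchain family
    ("attack_c", [0.0, 0.1, 1.0]),   # dissimilar: a different family
]
print(cluster_signatures(examples))  # → [['attack_a', 'attack_b'], ['attack_c']]
```

A real system would of course need far more robust signature extraction and clustering, but the sketch shows the basic idea of discovering toolchain families without prior knowledge of specific toolchains.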
Interested companies should upload eight-page proposals no later than 30 July 2020 to the DARPA BAA Website at https://baa.darpa.mil/. Email questions or concerns to Matt Turek, the DARPA RED program manager, at [email protected].
More information is online at https://beta.sam.gov/opp/f108cad02f824285af5ca85e1f7481f4/view.
John Keller | Editor-in-Chief
John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronics and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.