NASHUA, N.H. - The U.S. Department of Defense (DOD) has a bevy of technological hardware and software at its disposal to provide warfighters with the most up-to-date intelligence. At the heart of disseminating that information is the ability to process images captured by a range of devices.
Of course, it is necessary to separate actionable intelligence from the digital chaff. Uncrewed vehicles often are at the tip of the spear in data gathering, shielding warfighters from harm.
Image processing industry experts note that the DOD is continuing to make significant investments in this sector.
Dan Mor is director of the video and general-purpose computing on graphics processing units (GPGPU) division at Aitech Defense Systems Inc. in Chatsworth, Calif. He says many market researchers anticipate that artificial intelligence (AI) and image processing in the military and aerospace industry will “register significant market growth through 2025.” This growth, Mor says, is driven by the usefulness that AI and image processing have demonstrated.
“Systems powered by AI can perform many tasks, such as object recognition, classification, making conclusions and predictions, that not long ago were assumed to require human cognition,” Mor says. “And, as AI’s capabilities have dramatically expanded, we have seen a growing number of use cases in different fields, including the defense and space market.
“Much of this stems from the increased interest seen in these market studies, driven by recent trends of using robotics in the defense and space industry as well as the integration of AI in avionics, ground mobile and ground fixed platforms,” Mor continues. “The implementation of AI and the change from conventional weapons to ‘smart’ battlefields that better utilize imaging and graphic data will enhance the performance of existing platforms of armed forces around the world. We’re seeing significant growth in GPGPU-based processing platforms that are bringing AI performance to the edge. This is a huge leap forward in the realm of image processing, as networks and capture points become more diverse both in location and type of inputs.”
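The object recognition and classification tasks Mor describes are the kind of inference workload a GPGPU module runs at the edge. The sketch below shows the general shape of that workload in Python; PyTorch, the pretrained MobileNet model, and the image file name are illustrative assumptions, not details of any system mentioned here.

```python
# A minimal sketch of GPU-accelerated image classification at the edge.
# Assumes PyTorch/torchvision and a CUDA-capable device (e.g., an embedded
# GPGPU module); model, labels, and image path are illustrative only.
import torch
import torchvision
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"

# A pretrained ImageNet classifier stands in for a mission-specific network.
weights = torchvision.models.MobileNet_V3_Small_Weights.DEFAULT
model = torchvision.models.mobilenet_v3_small(weights=weights).to(device).eval()
preprocess = weights.transforms()  # resize, crop, normalize as the weights expect

image = Image.open("frame.jpg")               # hypothetical captured frame
batch = preprocess(image).unsqueeze(0).to(device)

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(f"Detected: {label}")
```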
Open architecture
The DOD has long been pushing for vendor-agnostic components through open-systems standards programs and design approaches like Sensor Open Systems Architecture (SOSA) and Modular Open Systems Approach (MOSA).
Image processing hardware — like nearly everything under the purview of the DOD — has a focus on using commercial off-the-shelf (COTS) components to reduce cost and enable technicians to replace parts or upgrade equipment.
“The defense and space markets are always looking for small form factor (SFF) and size, weight and power (SWaP) or size, weight, power and cost (SWaP-C) optimized systems, so we, as an industry, need to be developing critical rugged AI GPGPU systems for these market verticals,” Mor says.
Overlapping needs
For many people, the word “metaverse” entered the vocabulary when Facebook founder Mark Zuckerberg announced that the social network would be jumping into that space with both feet. The metaverse is a 3D space that enables people from all over the world to socialize, collaborate, learn, and more. The military is going “meta” as well, says Ike Song, president and general manager of strategy at the Mercury Systems mission systems segment in Andover, Mass.
“The military is trying to have its own version of the metaverse, whether it’s a distributed interactive simulation, (or) live virtual (reality),” Song says.
Song explains that the Army is developing a single platform in which soldiers can not only train and rehearse, but fight as well.
One system, called the Integrated Visual Augmentation System (IVAS), provides visual intelligence directly to soldiers via augmented reality (AR) goggles. In addition, IVAS connects ground-based soldiers with crews in vehicles like Strykers and Bradley Fighting Vehicles.
“Up until this point IVAS has really been focused on the dismounted Soldiers and getting that fighting goggle right,” Army Major Kevin Smith, C5ISR Night Vision and Electronic Sensors Directorate (NVESD) Research and Development Coordinator and PM IVAS Platform Integration Directly Responsible Individual (DRI), said in 2021. “So, in parallel, we in the Night Vision and Electronic Sensors Directorate have been working to build in applications to leverage both new and existing sensors on the vehicles to give the Soldier not just enhanced visual situational awareness, but also C2 [Command and Control] situational awareness while they’re inside of a platform or vehicle.”
On the screen
Whether in a soldier’s goggles, on a ship, or in the sky, actionable intelligence must be disseminated to a commander or to the warfighter directly.
“Future technologies like AR [augmented reality] could facilitate the ability for remote users to have the views and situational awareness of a manned vehicle while operating from the safety of a hardened remote position,” says Richard Pollard, a senior product manager at Curtiss-Wright Corp. in Davidson, N.C. “The same technologies used in AR solutions will also facilitate sharing of unmanned system video more easily to multiple users locally and globally. Each system could become a mobile situational awareness node giving total coverage of a combat area.”
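Pollard’s point about sharing unmanned system video with multiple users suggests a one-to-many distribution pattern. As a hedged illustration, the sketch below uses plain UDP multicast in Python; the group address, port, and raw-datagram payload are assumptions, and fielded systems would typically use standardized transports such as RTP or MISB-compliant streams.

```python
# A minimal sketch of one-to-many video distribution over UDP multicast.
# Group address, port, and payload are illustrative, not from any fielded system.
import socket

GROUP, PORT = "239.1.1.1", 5004   # hypothetical multicast group

# Sender: the node with the video source
send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
send_sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)

def publish(encoded_frame: bytes) -> None:
    """Push one encoded video frame to every subscribed display node."""
    send_sock.sendto(encoded_frame, (GROUP, PORT))

# Receiver: each situational-awareness node joins the same group
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
recv_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
recv_sock.bind(("", PORT))
membership = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
recv_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

publish(b"example-frame-bytes")
print(recv_sock.recv(65535))      # every joined receiver gets the same frame
```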
Digital displays across the board have increased in fidelity while dropping in price over the past two decades. Mercury’s Song notes that consumer products like televisions have gotten much larger than in previous decades. That growth, along with users’ comfort and familiarity with personal devices that have screens, touch and otherwise, has been leveraged by military technology as well.
“The emergence of ruggedized commercial HMI [human machine interface] technologies that today’s operators are already familiar with from their smart phones and tablets being used in rugged display solutions gives any user the feeling of immediate familiarity with their vehicle’s systems,” Curtiss-Wright’s Pollard says. “This reduces training burden and lowers cognitive load, allowing the operator to deploy more quickly and focus on the mission instead of operating their equipment.”
Reducing risk
Image processing can result in a deluge of raw data, the majority of which may prove unactionable. Aitech’s Mor notes that uncrewed systems like UAVs and remotely controlled robots can take humans out of the line of fire, while technology can also reduce the strain of image analysis.
“Today’s unmanned battlefields involve complex image processing, AI and video analytics. These processes include, but are not limited to, image classification, image location and image segmentation,” says Mor. “Let’s examine a use case of bringing AI to the defense drone industry. Drones are used for intelligence, surveillance and reconnaissance. Usually drones record many hours of footage every mission — video footage that takes a long time for human analysts to analyze. While human analysts process footage, the ground situation may change, and a latency between analyzed footage and real-time battle conditions is presented.”
Mor continues, “AI technology (deep learning process) enables the processing of much more data within the same timeframe, which will bring situational awareness to ‘near real time’ status. But it needs to be rugged and it needs to be reliable. Being able to use GPGPU-AI-based systems in the harshest environments gives system engineers the ability to forge new ground in rugged embedded computing. Aitech is focused on delivering this exceptional technology to our military, defense and space customers to use in their applications worldwide.”
Mor says that image classification, location, and segmentation are “perfect candidates for deploying NVIDIA deep learning inference networks, which can benefit from hundreds of parallel CUDA cores calculations.”
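To make the latency argument concrete, the sketch below shows one way GPU inference could triage recorded footage so analysts review only frames that contain objects of interest. OpenCV, the Faster R-CNN detector, the COCO class names, and the file path are illustrative assumptions, not Aitech’s or NVIDIA’s actual software stack.

```python
# A minimal sketch of triaging recorded drone footage with GPU inference so
# analysts see only flagged frames. All names and thresholds are illustrative.
import cv2
import torch
import torchvision

device = "cuda" if torch.cuda.is_available() else "cpu"
weights = torchvision.models.detection.FasterRCNN_ResNet50_FPN_Weights.DEFAULT
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
    weights=weights).to(device).eval()
categories = weights.meta["categories"]

cap = cv2.VideoCapture("mission_footage.mp4")  # hypothetical recorded sortie
flagged, frame_idx = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame_idx += 1
    if frame_idx % 30:               # sample roughly one frame per second at 30 fps
        continue
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).to(device)
    with torch.no_grad():
        det = model([tensor])[0]
    labels = {categories[i] for i, s in zip(det["labels"], det["scores"]) if s > 0.7}
    if labels & {"truck", "car", "boat", "airplane"}:   # classes of interest
        flagged.append((frame_idx, labels))

print(f"{len(flagged)} frames flagged for analyst review")
```

Sampling one frame per second keeps the sketch fast, but the same loop could run on every frame, or on a live stream, as the hardware allows.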
The DOD may be aiming to reduce the risk humans face on the battlefield with autonomous systems that can operate independently. Mercury’s Song says fully autonomous systems may be commonplace on the battlefield of the future, but humans will remain in the loop in the meantime. In addition, Song says that commercial applications like urban air mobility (UAM) using electric vertical takeoff and landing (eVTOL) aircraft may help shape military technology.
“It’s kind of interesting that they want to go full autonomous, but I don’t think they could get there very quickly,” Song says of military technology aspirations. “I think all the projections that they’re going to go to autonomous is going to take a while. Meanwhile, it’s going to be semi-autonomous. So again, you need a very unique display requirement to ensure that the pilot can see. But even if it’s fully autonomous, I do think that you need to have passengers seeing what’s going on around the aircraft. So situational awareness is going to be very important to that. So there’s a lot of things going on the commercial side, which I think is going to take over the investment of the military side, whether it’s the display or the sensor fusion. So, things that the military have been trying to do for UAVs is 'sense and avoid.' But I think when it's the UAM that does this, they have to resolve 'sense and avoid,' even if it goes to a very simple semi-autonomous piloting. So, I think those are the two major commercial trends that’s going to flow over to military.”
In the field
Aitech’s Mor says that his company’s A179 ultra-small-form-factor GPGPU AI supercomputer enables flexible I/O and video capture to manage multiple data and graphics streams simultaneously. The A179 Lightning is a ruggedized, fanless computer that is roughly the size of a cell phone. The A179 is powered by the NVIDIA Jetson Xavier NX, which is based on the Volta GPU architecture and has as many as 384 CUDA cores and 48 Tensor cores.
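As a rough illustration of what those core counts look like from software, the sketch below queries the visible CUDA device; it assumes PyTorch and CUDA-capable hardware, and the reported figures will vary by module.

```python
# A minimal sketch of confirming the GPU resources an application will see on a
# CUDA-capable module; the reported name and SM count depend on the hardware.
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # Volta-class SMs carry 64 CUDA cores each, so 6 SMs corresponds to 384 cores.
    print(f"{props.name}: {props.multi_processor_count} SMs, "
          f"{props.total_memory / 2**30:.1f} GiB memory, "
          f"compute capability {props.major}.{props.minor}")
else:
    print("No CUDA device visible")
```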
Video capture is enabled with several input types that can use multiple video streams simultaneously. These include SDI (SD/HD), four FPD-Link III (to MIPI CSI) camera inputs and eight composite (NTSC/PAL) channels.
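From application code, handling several inputs at once can reduce to polling a set of capture devices, as in the hedged sketch below; the OpenCV API and the device indices are assumptions, since inputs like the ones described above surface through vendor drivers.

```python
# A minimal sketch of polling several video inputs at once. Device indices are
# hypothetical V4L2 nodes; real SDI/FPD-Link/composite inputs depend on drivers.
import cv2

SOURCES = [0, 1, 2]   # e.g., /dev/video0..2 exposed by the capture hardware
captures = {src: cv2.VideoCapture(src) for src in SOURCES}

for src, cap in captures.items():
    ok, frame = cap.read()          # grab the latest frame from each input
    if ok:
        print(f"stream {src}: {frame.shape[1]}x{frame.shape[0]} frame captured")
    else:
        print(f"stream {src}: no signal")

for cap in captures.values():
    cap.release()
```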
Standard I/O ports, such as Gigabit Ethernet, USB 3.0 and 2.0, DVI/HDMI out, CANbus, UART serial and a number of discretes, offer flexibility in data management. The system also accommodates as many as two optional expansion modules (via factory configuration), such as additional I/O expansion modules or an optional NVMe SSD. The system allows for a removable Micro SD card and features 8 GB of LPDDR4x memory.
Curtiss-Wright’s Pollard says his company’s ruggedized vehicle displays “follow an open modular approach and are focused on the use of ruggedized consumer technologies to give the familiarity of the latest smart devices in today’s military systems.”
The GVDU (ground vehicle display unit) rugged touchscreen displays are “plug and play” with most PCs, and the smartphone-like USB HID projected capacitive (PCAP) touchscreen allows wet and gloved fingers to operate it. The display is ruggedized to military standards like MIL-STD-810G, MIL-STD-461F, and MIL-STD-1275E.
Unlike smart phones, however, the GVDU is designed to be washed down. GVDU displays show clear, high-resolution images, even outdoors in high brightness light conditions, without reflections. The multi-touch touchscreen can be used in the rain and while wearing thick gloves.
The Curtiss-Wright GVDU’s internal embedded processor supports an enhanced set of data interfaces, including Ethernet, USB, RS-232/RS-422, and GPIO. The internal processor provides graphics input to the displays and receives all touchscreen and bezel button operations. The processor may be used to implement a map display and to function as a mission computer.
The GVDU military rugged displays are available in several sizes, including 10.4-inch, 12.1-inch, and 15.6-inch; larger sizes, such as 17.3-inch and 21.5-inch, are available from the factory. These SWaP-optimized displays are qualified by independent, accredited test facilities for compliance with industry standards.
Late last month, Mercury Systems announced 6U OpenVPX avionics embedded computing modules. Mercury officials say their company’s modules are the first safety-certifiable multi-core modules to use the latest Intel Xeon D-1700 processors, formerly code-named Ice Lake D.
The modular, open system architecture design approach leverages BuiltSAFE elements with dual Xilinx Virtex UltraScale+ XCVU9P FPGAs, which have a reconfigurable framework to support real-time algorithms. There are PCI Express 3.0 interconnects and integrated 40 gigabit-per-second Ethernet.
The board supports packages such as VxWorks to achieve FAA CAST-32A objectives. The modules feature COTS elements complete with hardware and software DO-254 and DO-178 artifacts to deliver performance and streamline subsystem development, integration, and deployment.
Jamie Whitney
Jamie Whitney joined the staff of Military & Aerospace Electronics and Intelligent Aerospace. He brings seven years of print newspaper experience to the aerospace and defense electronics industry.
Whitney oversees editorial content for the Intelligent Aerospace website, produces news and features for Military & Aerospace Electronics, attends industry events, produces Webcasts, oversees print production of Military & Aerospace Electronics, and expands the Intelligent Aerospace and Military & Aerospace Electronics franchises with new and innovative content.