Rugged computers are the key to bringing advanced capabilities like artificial intelligence (AI), machine learning, and quantum sensing to military ships, aircraft, and land vehicles designed to operate at the edge of the battlefield.
A host of enabling technologies is coming to bear on rugged computers, such as ARM microprocessors, general-purpose graphics processing units (GPGPUs), and high-speed Ethernet networking. Nevertheless, the military computing industry has many design challenges to meet in size, weight, and power consumption (SWaP); thermal management and cooling; and tradeoffs between capability and operating environments before the best technologies can be unleashed.
As the military rugged computing industry moves forward, the most important technology trends, influential aerospace and defense applications, open-systems industry standards, and different approaches to meeting design needs are converging to shape its direction.
Aerospace and defense applications
Talk to experts in military rugged computing, and inevitably what comes up is a discussion of artificial intelligence, and the computing resources necessary to carry out AI -- particularly on the edge of the battlefield.
"More than ever over the last five to ten years, we are seeing a bigger demand for edge processing -- the need for data processing at the edge, and the use of AI," says Dominic Perez, chief technology officer at the Curtiss-Wright Corp. Defense Solutions segment in Ashburn, Va. "As powerful a tool as AI is, it is extremely resource-intensive."
AI often is the first concern today when it comes to rugged military computing. "The DOD [U.S. Department of Defense] is highly focused on AI, machine autonomy, and uncrewed vehicles; that's where we see more and more momentum. That's just naturally where the future is," says Aneesh Kothari, vice president of marketing for rugged computing specialist Systel in Sugar Land, Texas.
Kothari cites the DOD Replicator program as an example. Replicator seeks to deliver to warfighters relatively low-cost, AI-equipped uncrewed vehicles that are inexpensive enough not to cause big problems when they are lost or destroyed. U.S. military officials describe this quality as "attritable."
The first iteration of Replicator, called Replicator 1, was announced in August 2023 to deliver attritable autonomous systems at a scale of multiple thousands as early as August 2025.
Replicator 1 seeks to use large masses of uncrewed systems not only to put fewer people in the line of fire, but also to field unmanned vehicles that can be changed, updated, or improved with short lead times.
Replicator program
Last September, Defense Secretary Lloyd Austin announced the second iteration of Replicator, Replicator 2, to counter small uncrewed aerial systems at critical installations and force concentrations. Replicator 2 is to help overcome challenges in production capacity, technology innovation, authorities, policies, open-systems architectures, and systems integration.
Replicator seeks to strengthen collaboration between the Pentagon and commercial technology developers to develop inexpensive autonomous vehicles. More than 500 companies have participated in Replicator 1, and more than 30 have received contracts.
Programs like Replicator that drive the development of affordable AI "are definitely the big push for AI and autonomy," Kothari says. "AI and autonomy will be at every node of that network."
AI, and new generations of rugged computers that drive it, also will be at the heart of new initiatives to fuse sensors into a battlefield common operating picture, as well as efforts to move 5G wireless communications onto the battlefield.
"A common operating picture application is a new concept," says Curtiss-Wright's Perez. "The change is the wealth of sensors and information that can be pulled into the common operating picture. The challenge there is how to get the data fused into the common operating picture, with the right data formats, with data sanitized, and the whole human factors aspect."
Even the next generations of military wireless networked communications will rely on rugged computers and AI, Perez says. "The push for 5G on the battlefield really comes down to a lot of ruggedizing and processing."
Perez also points to the challenge of integrating individually held communications devices into the mobile ad-hoc networking of the future, referred to as MANET. This will involve meshing radios that can act as an extension to the radios that warfighters already carry.
Other AI and rugged computing-driven applications also involve the overall battle picture, says Austin Williams, product manager of rackmount computers at Systel. "You might have a vehicle doing complex signals intelligence operations and taking action," Williams says. "That is a massive volume of data and processing. You need that horsepower to run it."
Enabling technologies
High-performance computing is pervasive today; the rise of the data center is perhaps the best example. Yet light-years of difference separate leading-edge data center computing from the kind of AI rugged computing envisioned for the battlefield's edge.
To put it bluntly, the battlefield is far from the controlled environment of the data center. Battlefield computers must be rugged enough to withstand shock and vibration, temperature extremes, and careless operators. These kinds of rugged computers also must be size-, weight-, and power-efficient.
"Systel and other integrated computer manufacturing companies came in because you need ruggedized computers at the edge," says Systel's Kothari. "You need rugged edge hardware providing enabling capabilities for those networks. The big push from a computer level is to make products smaller and lighter, more rugged for harsh environments, and to integrate higher- and higher-Wattage electronics."
Kothari points to today's data center-grade computing devices such as multicore microprocessors, field-programmable gate arrays (FPGAs), and GPGPUs that must be specially packaged and protected from the rigors of the battlefield to succeed in the rugged military computing market.
Take the Nvidia Jetson GPGPU architecture, which is becoming popular for high-performance rugged military computing. "The Jetson architecture is ARM based, and that is the preferable approach to these AI applications," says Sam Mata, product manager of embedded computers at Systel. "You do lots with it, and it tends to be a very good approach to ingesting and manipulating that data. These GPUs are great for AI because they can do a large amount of parallel processing for AI training. It does high-end processing before it talks to the CPU. Very valuable for AI training."
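The data-parallel work Mata describes can be illustrated with a minimal sketch. The snippet below uses plain NumPy standing in for GPU kernels, and all names and shapes are hypothetical; the point is that one batched operation replaces a per-frame loop, which is exactly the pattern GPGPUs accelerate:

```python
import numpy as np

# Hypothetical illustration: a GPGPU applies the same operation to many
# sensor frames at once. A single batched matrix multiply stands in for
# the thousands of parallel threads a Jetson-class GPU would launch.
rng = np.random.default_rng(0)
frames = rng.random((64, 128))   # 64 sensor frames, 128 features each
weights = rng.random((128, 10))  # one shared inference layer

# Serial view: one frame at a time (what a CPU loop might do).
serial = np.stack([f @ weights for f in frames])

# Parallel view: all frames in one data-parallel call (what a GPU does well).
batched = frames @ weights

assert np.allclose(serial, batched)  # same numbers, one batched operation
```

On real Jetson hardware, the batched call would be dispatched to the GPU through a framework such as CUDA or PyTorch rather than NumPy, but the serial-versus-batched tradeoff is the same.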
Systel offers two products for battlefield AI that use Nvidia Jetson technology: the Kite Strike and Sparrow Strike rugged computers. "Those are ready to go to get trained by certain algorithms for the sensors responsible for taking in that data," Mata says. The smaller of the two, Sparrow Strike, weighs slightly more than two pounds, and is being designed into a small unmanned aircraft, Mata says. "It's an on-board mission computer on a small UAV on a next-generation defense program. It's a very compact, lightweight, and highly rugged product."
Performance demands
The need for enhanced rugged computing seemingly is unending. "AI at the edge we can help with," says Curtiss-Wright's Perez. "Sensors can pull in a tremendous amount of data, and the resolution of these sensors is an order of magnitude beyond what it was previously; that's an order of magnitude increase in compute power. Computer performance will keep marching up, and challenging those who need to ruggedize it."
There's no end in sight for demands for more computing power. "Generally, data is king," says Systel's Williams. "If you only have basic computers out in the field, Nvidia can help make actionable decisions at the soldier level, without spending weeks to find out what's going on."
Yet the challenges of high-performance rugged computers in the field persist -- especially with today's chips. "The chips keep getting hotter," Williams says. "On the rackmount side, you are generating a lot more heat, and your system can throttle. We're also getting into higher vibration and shock."
Design issues
So what are the best approaches for moving commercially developed rugged computer technology from the data center to the field? "We have to play the SWaP balancing game between capabilities and needs, and deal with electromagnetic and thermal signatures, as we go into the field," says Curtiss-Wright's Perez.
"It's not really a technical issue of sending a supercomputer to the field; people have been doing that for a while," Perez continues. "But supercomputers generate a great deal of heat, and have an EMI [electromagnetic interference] footprint. The challenge is to balance what capabilities you can and should have at the edge, and what can you do if you are denied communications at the edge. Performance per Watt is the name of the game."
There's little choice these days other than ruggedizing commercially developed technology, rather than developing rugged computers from the ground up. "We've really flipped the script in the last 50 years: most research funding used to be sponsored by the federal government; now it's a small fraction," Perez explains. "Industry today is driving what technology is available, and they don't give much thought to how that technology moves to the field."
Heat and cooling are crucial considerations for edge computing, though they're given little thought in the climate-controlled data center. "Not all of these high-end processors can handle that temperature range without a reset in the middle," Perez points out. "On a mission computer, you can't be resetting them all in situ. We are going to see more and more of this as die sizes decrease and signal processing increases."
Some of the design tradeoffs go deeper than just making technology choices, Perez says. "We still are really learning about how much data we can shove in the brain of a human. We as a community are learning together what is the most advantageous to put in front of a warfighter. Is it on a screen in a vehicle, or worn on a human? You can't give a human everything at once, and what are the requirements that we need to push it to the edge?"
The role of industry standards
Today's open-systems standards promise to help rugged computer designers respond quickly to demand for enhanced performance and rapid technology insertion. Chief among these standards and design guidelines are the Modular Open Systems Approach (MOSA) and the Sensor Open System Architecture (SOSA) standard. Both seek to reduce or eliminate vendor lock-in for contractors and projects, support interoperability among different vendors, and facilitate rapid upgrades through technology insertion.
"We've been a strong advocate of the SOSA working group," says Forrester, director of product strategy at rugged computing specialist Concurrent Technologies plc in Colchester, England. "The change they have made toward a small number of open standards for plug-in devices means there is little risk to the primes adopting a product from Concurrent, because if we fail, they easily can buy from other suppliers."
SOSA and MOSA represent big milestones in open-systems design. "That's a big change in the opportunities that we can get into," Forrester says. "We introduced a new product last year, and had a requirement to deliver a significant number of those products at the end of 2024. We went from the product being a concept design to one that was fully SOSA-aligned. We were able to deliver several hundred of those products, which is a significant difference from what we could do five years ago. The SOSA dividend is really starting to pay off."
The next big change where open standards are concerned will be VITA 100, which will offer SOSA-aligned computer cards in 3U, 4U, and 6U sizes, Forrester says. VITA 100 is expected to receive American National Standards Institute (ANSI) approval next year.
The expected 4U size of VITA 100 boards will have the real estate necessary to accommodate the large sizes and bandwidth requirements of current and future integrated circuits. "The chipsets we are provided by Intel, Nvidia, and AMD/Xilinx today are too big and too power-hungry to cope in a 3U form factor," Forrester says. "We have customers telling us they need more performance, and that can be impossible in a 3U form factor."

John Keller | Editor-in-Chief
John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronics and optoelectronic technologies in military, space, and commercial aviation applications. John has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.