Rugged computers look to the data center

Jan. 19, 2016
Virtual-machine technology, fast interconnects, innovative thermal-management techniques, and modular architectures bring data-center power to embedded computing.

Rugged computers for aerospace and defense applications have come a long way from the days of heavy boxes that could be dropped off the backs of trucks, run over in the mud, and then put back into operation as if nothing bad had ever happened.

Make no mistake: tough, rugged designs are just as important now as they've ever been - the military still demands drop-in-the-mud computers - yet today's rugged computers increasingly are taking lessons from sophisticated server computing, with fast interconnects, virtual-machine technology, and open-systems modular architectures rolled into tough mobile machines that would be just as at home in the back of a Humvee as they would in the data center.

Some issues involved in military rugged computing are just as important today as they were years ago, such as small size, weight, power, and cost (SWaP-C) and innovative electronics-cooling approaches. Small, powerful processors such as the Intel Xeon D - and even full-size Xeon processors - are pushing thermal-management schemes to the limit, and the rise of wearable computing, unmanned vehicles, and other SWaP-C-constrained applications demands computers that are smaller and more powerful than ever before.

The Aitech RediBuilt A172 rugged computer is developed around a standard Type 6 COM Express module, the Intel Core i7 processor, an industry-standard pin-out, and the ability to support several processor options.

Need for SWaP

"The major keyword is SWaP-C optimization," says Herve Garchette, business development manager at Creative Electronic Systems (CES) in Geneva, Switzerland. "SWaP-C means more and more complex systems integration of high-performance computing, systems on chips, and optical interfaces."

Often the goals of reducing cost, size, and weight go hand in hand. "Everyone is trying to get more features into a smaller form factor using SWaP-C methodology," says Jason Shields, product manager for 3U VPX systems at Curtiss-Wright Defense Solutions in Santa Clarita, Calif. "We are seeing a lot of consolidating of features; on current platforms we may have several functions in one platform. We are seeing a major consolidation effort; it reduces power and size with the reduction in cabling."

Inevitably, however, the tradeoff with SWaP-C typically involves hotter and hotter operating temperatures, which puts pressure on systems designers to come up with ever-more innovative cooling techniques. "Everybody is dealing with more and more compute density. We used to struggle to get 40 to 50 watts per slot in the old VME days. Now we have 50 watts at the CPU in 3U VPX," says David Pepper, product manager of core computing at Abaco Systems (formerly GE Intelligent Platforms) in Huntsville, Ala.

"People are putting more and more compute density in smaller and smaller space," Pepper continues. "We can have boards at the 3U form factor that are approaching 100 watts. Cooling is a pretty big challenge, and people are starting to look at whether conduction-cooled solutions are adequate, and asking if they might need something more exotic. Do we need more air-flow-through and liquid-flow-through cooling to handle the heat? Will the cooling technologies we have today accommodate the silicon of tomorrow?"

The answers to those questions can be elusive, and may involve not only innovations in conduction, convection, and liquid cooling, but also new materials that conduct heat away from critical components more effectively. "Heat management is very critical," says CES's Garchette. "Heat management can involve new materials; it is hard to follow some generic strategy."

The Themis Computer Hyper-Unity all-flash, hyper-converged, scalable rugged server computer infrastructure is for compute-intensive military and aerospace virtualized rackmount applications.

Getting the heat out

Dealing with heat is a central design issue in today's rugged high-performance computers. Heat essentially is a consequence of packing more computing performance into shrinking processors and circuit cards. With more heat generated in a smaller area, techniques for getting the heat out have become critical parts of the design process.

Typically rugged computers for military applications use conduction cooling, which conducts heat away from hot components like processors across the circuit card, through the card edges, and through the enclosure to the outside air. Designers use a variety of techniques for conduction cooling, including heat pipes, which act as efficient channels to move heat quickly across the card to the card edges. Convective cooling also can be effective, as fans move air over cards and through enclosures to carry heat to the outside air. Fans can be a problem, however, because they represent a single point of failure in the design. Sometimes more drastic measures are necessary.

When conduction and convection are inadequate for removing heat from high-performance rugged computers, designers often resort to liquid flow-through cooling, which channels heat through liquid that flows through the circuit cards and chassis. While effective, liquid flow-through cooling can add expense and weight to a design.
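A rough thermal budget shows why conduction cooling starts to run out of headroom at the power levels quoted in this article. The short Python sketch below estimates junction temperature from a simple stack of thermal resistances; every resistance value and the 71-degree-C card-edge temperature are illustrative assumptions, not figures from any vendor quoted here.

```python
# Back-of-the-envelope junction-temperature estimate for a conduction-cooled module.
# All thermal-resistance values are illustrative assumptions, not vendor data.

def junction_temp(power_w, card_wall_temp_c, resistances_c_per_w):
    """Junction temperature = card-wall temperature + power x total thermal resistance."""
    return card_wall_temp_c + power_w * sum(resistances_c_per_w)

# Assumed conduction path: die-to-case, thermal interface, heat spreader, wedgelock-to-chassis
path = [0.15, 0.10, 0.25, 0.20]   # degrees C per watt (illustrative)

for power in (50, 100, 135):      # watts, matching the slot and CPU figures quoted in the article
    tj = junction_temp(power, card_wall_temp_c=71.0, resistances_c_per_w=path)
    print(f"{power:>3} W load -> ~{tj:.0f} C at the junction")
```

Under these assumed numbers, a 50-watt load stays near 106 degrees C, but at 100 watts and beyond the estimated junction temperature climbs well past the roughly 100 to 105 degrees C that most silicon tolerates, which is why designers look to flow-through and other more aggressive cooling approaches.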

Sometimes even more exotic thermal-management techniques are necessary, including refrigeration in which chassis are air-conditioned. Designers at General Micro Systems (GMS) in Rancho Cucamonga, Calif., are taking an entirely different approach with RuggedCool computers.

GMS engineers use the full server-class Intel Xeon processor in the RuggedCool line, a device that can dissipate as much as 135 watts, explains Ben Sharfi, CEO of General Micro Systems. "Cooling of the processor gets significantly more difficult when you deal with 135-watt processors," Sharfi says.

GMS designers use a thermal-management technique that mounts the processor to a copper plate, and floats the copper plate and processor in a tub of liquid silver. "The heat from the processor dissipates through the copper plate, which is suspended in a liquid-silver chamber," Sharfi says. "The silver melts and makes a perfect-thickness media, and next to gold, silver is the best thing for transferring heat."

An added benefit of this approach is the processor's ability to withstand the effects of shock and vibration. "Our shock resistance jumps up to 160 Gs because the processor is never touching the case," Sharfi says.

Today's military rugged high-performance computers are taking a modular design approach to facilitate future system upgrades and technology insertion.

Moving away from standards

With all its benefits, there's a price to be paid for such a design, and the biggest drawback is cost. "It is very expensive and very messy," Sharfi says. In addition, the RuggedCool approach represents a custom design, which many call into question in this era of open-systems standards.

A move away from open-systems standards is fine with Sharfi. "There are no standards in today's market in any platform," Sharfi declares. "There is no interchangeable standard that anybody can claim that is in the market today. It isn't VPX; VME was the last platform that did that. No two manufacturers use the same number of pinouts and lanes for VPX; the only standard is where the power pins are. Everything is custom; it's a single-manufacturer architecture."

While there is a noticeable move away from some open-systems standards in today's high-performance rugged computing designs, it's nothing like a wholesale rejection of standards that embraces full-custom designs - far from it.

Design trends involve standard interfaces from computer box to computer box, but less of an emphasis on interchangeable standards inside the box. "There's a trend in the market to look at an LRU [line-replaceable unit] as a boxed solution, and not based on a particular standard on the inside," says Mike Southworth, product marketing manager for small-form-factor systems at Curtiss-Wright Defense Solutions in Salt Lake City. "The trend we are seeing is customers are not tied into a specific standard or architecture." Some are even considering removing computing electronics from the enclosure altogether to save on size and weight.

No NRE required

Re-evaluating rugged computer design to treat computing enclosures as building blocks - with standard external interfaces but no particular adherence to standards inside the box - presents opportunities for budget-conscious aerospace and defense customers who are reluctant to pay non-recurring engineering (NRE) costs.

"Because of budget constraints there is a reluctance to having funding approved for custom solutions, and lack of desire to pay for NRE," Southworth says. "Curtiss-Wright has a modified COTS [commercial off-the-shelf] business model to take modular systems and integrate off-the-shelf I/O modules without NRE costs."

This design approach also tends to be forward-looking because relying on industry-standard interfaces can help accommodate systems upgrades and technology insertion in the future. "There is no NRE today or in the future," says Curtiss-Wright's Shields. "We are designing for that flexibility so that when new boards come out on the market, designers can upgrade their systems without NRE."

Designers at Aitech Defense Systems in Chatsworth, Calif., take a no-NRE design approach and call it RediBuilt. "Going back a couple years, Aitech introduced the concept of RediBuilt that came to market in a couple different form factors to offer something to the customer that doesn't require any NRE," says Doug Patterson, vice president of the military & aerospace business sector at Aitech.

"The customer gets a box with cables for all his I/O on two 128-pin circular connectors," Patterson explains. "He gets his box with Intel- or PowerPC-based platforms, all configured, all his drivers, all done. All the customer has to do is put in his Ethernet address and go."

Aitech products in this category revolve around the company's RediBuilt A190 and A172 rugged computers. "These are fairly complex systems," Patterson says. "This can be used as a main mission computer for a large manned or unmanned aircraft. For example, we delivered an Intel-based solution and dropped it off with the customer. He loaded the program he had created on his laptop computer, and it all worked."

The General Micro Systems SB1102-HDVR Eagle is a small-form-factor rugged video recorder and workstation processor able to capture four independent HD-SDI 1080p video channels at 60 frames per second.

The smaller RediBuilt A172 rugged computer is being designed into unmanned ground vehicles - particularly an unmanned airport tug that tows aircraft from the gate to the runway to save the fuel aircraft otherwise would burn while idling and taxiing.

"The already-built idea came into fruition because we were getting a lot of pushback from customers saying give us less NRE," Patterson says. "It was just about budgets as everything was sequestered. That rippled through the defense contractor base."

The RediBuilt approach enables designers to replace boards inside the box if they want, or customers can ask Aitech engineers to make the alteration. "Customers still want to add stuff, but they want to do it cheaply," Patterson says. "We put in the COM Express port for that, or add wireless to the module. Using Linux or Windows, people can work right out of the chute."

Virtualization in rugged computing

The need to accommodate legacy software in modern military computing architectures, together with the imperative to shrink the size and weight of computer hardware, is giving rise to the use of virtual-machine technology in high-performance rugged computing.

A virtual machine emulates one or more complete computers on a single hardware platform. Virtual machines are based on the architecture and functions of a real or hypothetical computer, and implementations may involve specialized hardware, software, or a combination of both.

"Everybody accepts that the virtual machine is key," Sharfi says. "A lot of imagery, payloads, and night-vision systems are written in Windows XP. You cannot just go run them on the latest multicore Intel i7 or Xeon processors. Microsoft does not support XP anymore, and the new hardware does not support those old drivers."

Virtualization technology, however, enables systems designers to run military software written in the 1980s alongside newly written software in the same system. "It takes the driver issue out," Sharfi says. "You take the image of a system you have from 30 years ago."
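As a minimal sketch of the idea - assuming QEMU/KVM as the hypervisor, which none of the vendors quoted here specify - a legacy disk image can be booted as a guest on a modern multicore processor while the hypervisor presents the old, well-supported virtual hardware its drivers expect. The image file name and resource sizes below are hypothetical.

```python
# Minimal sketch: boot a legacy guest OS image under QEMU/KVM from Python.
# The hypervisor emulates old, well-supported virtual devices (display, NIC),
# so the guest's decades-old drivers never see the new silicon directly.
import subprocess

LEGACY_IMAGE = "legacy_mission_sw.qcow2"    # hypothetical image captured from the original system

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                          # use the hardware virtualization extensions of the host CPU
    "-m", "2048",                           # 2 GB of guest RAM
    "-smp", "2",                            # two virtual CPUs
    "-drive", f"file={LEGACY_IMAGE},format=qcow2",
    "-vga", "std",                          # emulated display adapter the legacy drivers already support
    "-net", "nic", "-net", "user",          # emulated NIC with user-mode networking
]

subprocess.run(cmd, check=True)             # blocks until the guest shuts down
```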

3U VPX single-board computers like the Abaco Systems SBC328 are building blocks for many of today's rugged military high-performance computers.

Virtual-machine technology enables designers of large and complex computers for military surveillance and reconnaissance to size them appropriately for tight applications like unmanned vehicles, says Rick Studley, chief technology officer at Themis Computer in Fremont, Calif. "We are seeing much more virtualization, and collapsing the whole infrastructure to the next level of integration. We see a big embrace of hyper-convergence; it's like taking a data center and converting it into a small box. By adding more boxes to it you can grow your data center."

Interconnect challenges

Today's military rugged computers are pushing the bounds of data interconnects and I/O to the point that the industry as a whole may have to re-evaluate whether optical interconnects should replace copper.

Fourth-generation PCI Express, which will move data as fast as 16 gigatransfers per second per lane, is expected to stabilize around 2017. "Can we accommodate this with our electronic backplanes, or is that the tipping point that will drive us to all-optical backplanes?" asks Abaco's Pepper. "We usually can find a way to do it, and we will keep trying to push that boundary, but in 2017 or 2018 we may have to face a change. We might not be able to go to the next step."
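For a sense of the raw numbers behind that concern, the short Python sketch below converts the published 16 GT/s signaling rate and 128b/130b line encoding of fourth-generation PCI Express into usable link bandwidth; protocol overhead such as packet headers and flow control is ignored, so real links deliver somewhat less.

```python
# Rough usable bandwidth of a fourth-generation PCI Express link, from the
# published 16 GT/s per-lane signaling rate and 128b/130b line encoding.

RAW_RATE = 16e9            # transfers per second, per lane
ENCODING = 128 / 130       # 128b/130b line-code efficiency

lane_gbps = RAW_RATE * ENCODING / 1e9        # usable gigabits per second per lane

for lanes in (1, 4, 8, 16):
    gbytes_per_sec = lane_gbps * lanes / 8   # convert bits to bytes across the link
    print(f"x{lanes:<2} link: ~{gbytes_per_sec:.1f} GB/s")
```

An x16 link works out to roughly 31 gigabytes per second, which is the kind of signaling load designers question whether copper backplanes and connectors can carry cleanly.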

About the Author

John Keller | Editor

John Keller is editor-in-chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronic and optoelectronic technologies in military, space, and commercial aviation applications. A member of the Military & Aerospace Electronics staff since the magazine's founding in 1989, Mr. Keller took over as chief editor in 1995.
