Dilemma: Databus or switched fabric?

Feb. 1, 2005
Single-board computers rely on fabrics for speed, not stability, which presents designers of data-intensive, interrupt-driven, hard-real-time systems with a raft of difficult decisions.

Today’s soldier carries more computing power on his belt than his father could load in the back of a Jeep. Sensors, meanwhile, gather data around the clock, and unmanned vehicles steer themselves through entire missions. It falls to the engineer to build single-board computers and mezzanine-board computers that can handle this challenge.

Board designers today face a new performance bottleneck: modern processors are so fast that traditional parallel databuses cannot supply them with data quickly enough to exploit their speed.

The CHAMP AV-IV from Curtiss-Wright Controls Embedded Computing is a quad 1.5-gigahertz PowerPC VME board for digital signal processing.

The answer might be switched-serial interconnects, an emerging family of high-speed networks, or “fabrics,” capable of moving a vast amount of data among components on the board. Still, designers complain that these developing technologies are not yet reliable or predictable enough for battlefield use.

Chipmakers, meanwhile, are not waiting on the sidelines; under intense pressure to pick sides, each major microprocessor brand is aligning with a different fabric.

Power efficiency also will drive the choice, because designers must choose between low-wattage systems for wireless and mobile tasks and high-powered systems for processing data-intensive signals from radar and sonar sensors.

“You’re going to see a lot more talk than shipments of fabrics,” says Ray Alderman, executive director of the VMEbus International Trade Association (VITA) in Fountain Hills, Ariz. “2005 will be the year of reality; stuff that’s overly hyped will bite people who got too far down the pipeline with it.”

That’s because fabrics rely on software to run smoothly, and much of that code has not been written yet. Fabrics are 10 percent hardware and 90 percent software, and the world has not seen most of that software yet, he says.

In the meantime, military designers will stick with what works.

“Fabrics cannot do what buses do. They can’t do deterministic, real-time applications, and their latencies will always be higher. For hard, real-time apps, VME will be around for 20 years or better,” Alderman says.

Designers have to crawl and walk with fabrics before they can run with them, Alderman says. People will not upgrade as fabrics evolve from 2 to 5 gigabytes per second; they will wait until fabrics can operate reliably at 10 gigabytes per second.

While designers wait, the competing fabric standards will fragment into separate niches across the electronics industry, experts predict.

Computer makers want their products to be unique, not commodities, so each will adopt a different fabric to stay incompatible with the competition, Alderman says. In the telecommunications market, switch and router manufacturers will adopt different fabrics to stay incompatible with each other. In the industrial and commercial market, server manufacturers will do the same.

Three fabric categories

Alderman says that, as they move ahead, manufacturers will choose fabrics from three categories:

• a “tightly coupled, shared-everything” system, which is deterministic and hard-real-time, like VME and RapidIO, that enables all its processors to tap all its resources;

• a “snugly coupled, shared-something” system, which runs in soft real time and requires its components to share disk or memory space, such as InfiniBand, StarFabric, and PCI Express Advanced Switching; and

• a “loosely coupled, shared-nothing” system such as Ethernet, in which each processor has its own operating system and resources, and in which board and box maintain only a minimal relationship.
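
Laid out concretely, that taxonomy amounts to a small lookup table. The Python sketch below is purely illustrative - the category names, timing behavior, and example interconnects come from the list above, while the data structure and helper function are our own.

```python
# Illustrative only: Alderman's three fabric categories as a small lookup
# table. Categories, timing behavior, and examples come from the article.
FABRIC_CATEGORIES = {
    "tightly coupled, shared-everything": {
        "timing": "deterministic, hard real time",
        "examples": ["VME", "RapidIO"],
    },
    "snugly coupled, shared-something": {
        "timing": "soft real time",
        "examples": ["InfiniBand", "StarFabric", "PCI Express Advanced Switching"],
    },
    "loosely coupled, shared-nothing": {
        "timing": "non-real time",
        "examples": ["Ethernet"],
    },
}

def category_of(fabric: str) -> str:
    """Return the Alderman category for a named interconnect."""
    for category, info in FABRIC_CATEGORIES.items():
        if fabric in info["examples"]:
            return category
    return "unknown"

for name in ("VME", "Ethernet", "StarFabric"):
    print(f"{name}: {category_of(name)}")
```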

The latter’s extreme independence spells trouble for Ethernet, Alderman warns. “One-gigabit Ethernet is an electronic version of cancer,” he quips. “It takes 1 gigahertz of processing power to move 1 gigabit of data, and the protocol overhead is 70 percent of the processing requirement.”

In contrast, InfiniBand uses remote direct memory access (RDMA) for memory-to-memory transfers between boards without taxing the microprocessor, requiring only 10 percent overhead, Alderman says.
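
Alderman’s figures reduce to simple arithmetic. The sketch below takes a literal reading of his 1-gigahertz-per-gigabit rule of thumb and the quoted 70 percent and 10 percent overhead shares, and estimates the processor time burned on protocol work alone; it is an illustrative interpretation, not a published model.

```python
# Back-of-envelope reading of the quoted overhead figures: ~1 GHz of
# processor per 1 Gbit/s moved, with 70% of that burned on protocol for
# plain Ethernet versus ~10% for InfiniBand RDMA.
GHZ_PER_GBPS = 1.0  # Alderman's rule of thumb

def protocol_cost_ghz(rate_gbps: float, overhead_share: float) -> float:
    """Processor gigahertz consumed by protocol work alone at a given rate."""
    return rate_gbps * GHZ_PER_GBPS * overhead_share

for name, share in (("1-Gigabit Ethernet", 0.70), ("InfiniBand RDMA", 0.10)):
    cost = protocol_cost_ghz(1.0, share)
    print(f"{name}: ~{cost:.1f} GHz of CPU per 1 Gbit/s, just for protocol")
```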

In September 2003, scientists at Virginia Tech in Blacksburg, Va., collected 1,100 dual-processor PowerPC G5-based Macintosh computers running the Mac OS X operating system, and tied them together with 24 high-speed InfiniBand switches from Mellanox Technologies in Santa Clara, Calif.

The resulting “Big Mac” supercomputer ranks among the top five in the world, despite a cost of only $5 million. Researchers use the cluster to examine nanoscale electronics, chemistry, aerodynamics, molecular statics, computational acoustics, and molecular modeling.

This supercomputer has its own meaning to computer scientists, Alderman says. It made people notice that InfiniBand has a niche in supercomputing, and that its low latency could also serve embedded applications in radar and sonar arrays, he says. That is why planners chose InfiniBand for the new VXS board standard.

Still, Alderman has not given up on Ethernet yet.

Researchers on IEEE’s 802.3 committee are putting RDMA into 10 Gigabit Ethernet and finding a way to run it over a pair of copper backplane wires. Those improvements would move the fabric squarely into Alderman’s second category and qualify it for soft-real-time embedded applications, he says.

Simple jobs keep Ethernet popular

Three years ago, planners at the PCI Industrial Computing Manufacturers Group (PICMG) in Wakefield, Mass., saw the newest processors running faster than parallel backplane databuses could feed them data. As a result, PICMG created the 2.16 standard for running Ethernet over the backplane.

Other fabric options have surfaced since then, and military designers are scrambling to predict which will prevail. In fact, that choice might be decided more by marketing than by technology, says Joe Pavlat, president of PICMG.

Processor manufacturers are trying to preserve market share by choosing different fabric standards. That is why PowerPC chips from Freescale Semiconductor of Austin, Texas (spun off from Motorola), will use Serial RapidIO, and Intel’s Pentium family will use PCI Express, he says.

“The switched-serial interconnects are falling along the CPU-manufacturer boundaries. That’s another reason for Ethernet: because it’s the only remaining ubiquitous switched-serial interface,” Pavlat says.

Before PCI, every manufacturer used its own proprietary bus designs, he says. PCI was the only ubiquitous databus the industry had ever seen, and now that ubiquity is going away; Serial RapidIO will never interface with PCI Express.

Still, those options make up just 20 percent of the market. “The switched-serial fabric that powers 80 percent of all transactions on the planet is Ethernet. That’s because Ethernet is good enough, it’s cheap enough, and people understand it,” Pavlat says.

Military designers are loath to use Ethernet because its high software-protocol overhead leads to latency and poor determinism, Pavlat admits. In comparison, choices such as PCI Express, StarFabric, and Serial RapidIO are more deterministic because they run with known latency and known jitter.
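
What “known latency and known jitter” buys a hard-real-time designer is easiest to see numerically. The sketch below models a deterministic fabric as a tight latency band and a best-effort network as the same average latency with a long tail; the distributions are invented for illustration, not measurements of any real interconnect.

```python
# Illustrative only: same mean latency, very different worst cases.
import random

random.seed(1)
N = 100_000

deterministic = [10.0 + random.uniform(-0.5, 0.5) for _ in range(N)]  # microseconds
best_effort   = [random.expovariate(1 / 10.0) for _ in range(N)]      # same mean, long tail

for name, samples in (("deterministic fabric", deterministic),
                      ("best-effort network", best_effort)):
    print(f"{name}: mean {sum(samples)/N:5.1f} us, worst case {max(samples):6.1f} us")

# A hard-real-time deadline must cover the worst case, so the tail of the
# distribution, not its average, sets the design point.
```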

Still, Ethernet is not a lost cause. Engineers are trying to fix its overhead problem with TCP/IP offload engines, called TOEs. As the standard grows toward 10 Gigabit Ethernet, its raw speed will overwhelm those shortcomings. Pavlat cites Internet telephony, known as Voice over Internet Protocol (VoIP), as proof that Ethernet can deliver dependable timing.

If Ethernet can handle Internet telephony, it can handle battlefield communications, short of real-time tasks, such as fire control and avionics, he says.

“New military technologies like WIN-T and Net-Centric Warfare are just moving voice, video, and radar data. Those are high-density communications interfaces, so they will need Ethernet, too. You will be trading battlefield pictures instead of American Idol, but it’s still video,” Pavlat says.

Computer makers are also picking sides; Dell began shipping its PCs in 2004 with PCI Express instead of the usual PCI, he says. Most users will never know they are using a switched-serial interconnect instead of a parallel bus, and Dell will reap the technical advantages.

Parallel buses such as PCI and VME will fail if one board in the system fails, but switched-serial systems can work around those bad components. That flexibility will also help to insulate military designers from parts obsolescence, Pavlat predicts.
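
The rerouting argument can be sketched with a toy topology. The example below assumes a hypothetical dual-star arrangement, in which every board uplinks to two redundant switches; the names and link structure are invented for illustration.

```python
# Illustrative only: a switched system can route around a failed component.
from collections import deque

def reachable(links, src, dst, dead=frozenset()):
    """Breadth-first search over the link graph, skipping failed nodes."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in links.get(node, ()):
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                queue.append(nxt)
    return False

links = {  # hypothetical dual-star backplane
    "boardA": ["sw1", "sw2"], "boardB": ["sw1", "sw2"],
    "sw1": ["boardA", "boardB"], "sw2": ["boardA", "boardB"],
}

print(reachable(links, "boardA", "boardB"))                # True
print(reachable(links, "boardA", "boardB", dead={"sw1"}))  # True: reroutes via sw2
# On a shared parallel bus there is no alternate path: one hung board can
# stall the single medium that every other board depends on.
```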

Fabrics also suffer less than parallel buses do from electromagnetic interference (EMI). Switched-serial interconnects use differential signaling, which makes them far more resistant to noise, and they use low voltages, so they also produce less noise.
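
The arithmetic behind that noise resistance is simple: interference couples onto both wires of a differential pair roughly equally, so the receiver’s subtraction cancels it. The voltages below are invented for the example.

```python
# Illustrative only: common-mode noise cancels in a differential pair.
signal = 0.4                    # volts: the intended low-voltage swing
noise = 0.25                    # volts of common-mode interference

v_plus = +signal / 2 + noise    # the noise rides on both legs equally
v_minus = -signal / 2 + noise

received = v_plus - v_minus     # the receiver senses only the difference
print(f"{received:.2f}")        # 0.40 -- the common-mode noise drops out
```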

Whoever wins the fabric wars, PICMG planners will stay flexible. Now the group is pushing fabrics into another corner of the industry by evolving beyond the PCI Mezzanine Card (PMC). The new module is the AMC - short for Advanced Mezzanine Card - which is “a PMC module on steroids,” Pavlat says. The AMC standard describes a hot-swappable mezzanine card with versions to accommodate Ethernet, PCI Express, or RapidIO.

Market share shifts

Among bus architectures, the VME standard has a large advantage in market share compared to Compact PCI, but its lead is shrinking, says Eric Gulliksen, embedded hardware group manager at Venture Development Corp. (VDC) in Natick, Mass.

Curtiss-Wright Controls Embedded Computing makes this 6U VITA 46 board.

VDC researchers performed a market survey in April 2004, measuring manufacturers’ sales of all types of electronics - single-board computers, I/O cards, digital signal processing (DSP) boards, graphics, networking, backplanes, mass storage, and others - to the North American military and aerospace commercial off-the-shelf (COTS) market.

In 2003, the split was $291.4 million for VME compared to $41.8 million for Compact PCI. Both bus architectures are predicted to grow in the coming years, but Compact PCI will grow much faster. Predicted sales for 2008 will reach $315.7 million for VME and $82.8 million for Compact PCI, VDC researchers say.
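
Those figures imply sharply different growth rates. The quick calculation below - our arithmetic, not VDC’s - derives the compound annual growth each forecast implies.

```python
# Compound annual growth rates implied by the 2003 actuals and 2008
# forecasts quoted above (figures in millions of dollars).
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

for name, y2003, y2008 in (("VME", 291.4, 315.7), ("Compact PCI", 41.8, 82.8)):
    print(f"{name}: {cagr(y2003, y2008, 5):.1%} per year")
# VME grows about 1.6 percent a year; Compact PCI about 14.6 percent --
# hence "much faster."
```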

Sales in Western Europe show the same trend, although the market is about one-fourth of the size. In both regions, single-board computers alone represent roughly half the total market of electronic devices.

“So, there is a trend toward Compact PCI away from VME,” Gulliksen says. That migration will happen faster in the rest of the world than in the U.S.

The SVME/DVM 680 SwitchBlade from Curtiss-Wright Controls Embedded Computing enables designers to interconnect chassis, cards, and processors through Gigabit Ethernet links.

“The reason is we’re at war. Field commanders don’t want to make technology changes and go to war with it,” Gulliksen says. “Compact PCI offers some size advantage and some cost advantage, so it will go into some naval vessels and aircraft, but not in a wholesale way until the war is over.”

Another reason the rest of the world will adopt Compact PCI more quickly is that they are buying new electronics, not replacing current gear. That means they do not have the huge installed base of VME acting as market-share inertia, Gulliksen says.

Military designers are still slow to adopt fabrics. Of the $41.8 million worth of Compact PCI devices shipped to the North American military and aerospace market in 2003, 92.6 percent were not fabric-enabled, VDC researchers say. Just 6.9 percent used PICMG 2.16, the Ethernet standard, and 0.3 percent used other fabrics, including PICMG 2.17, the StarFabric standard.

Eventually, fabric use will rise quickly. Of the $58 million market predicted for Compact PCI in 2005, the share of non-fabric devices will fall to 89 percent, with PICMG 2.16 rising to 9.7 percent, according to VDC figures.

Board makers stay flexible

Military planners are seeking ways to build network connectivity throughout the battlespace, including the Global Information Grid (GIG) in the sky.

To meet that goal, electronics engineers will have to design each single-board computer and node with its own Internet Protocol (IP) address, says John Wemekamp, chief technology officer for Curtiss-Wright Controls Embedded Computing in Leesburg, Va.

Fortunately, the trend toward electronics miniaturization means single-board computers are becoming complete systems on a card, integrating several processors, onboard memory, high-speed I/O, and switched-fabric connections.

Curtiss-Wright, for example, makes a quad PowerPC card for the signal-processing market. Boeing uses this product for its Operational Flight Program (OFP), running four operating systems and four applications on one board, Wemekamp says.

The StarSwitch is Radstone’s StarFabric switch and PMC carrier.

Military designers are using single-board computers for DSP because they need the extra horsepower for jobs such as sensor processing, data fusion, relaying information to the right operator, and autonomous operations of unmanned vehicles.

“They need more MIPS,” Wemekamp says, referring to computer speed measured by millions of instructions per second. “No matter how much we give them, they want more.”

These fast computing speeds demand faster connections than traditional VME and Compact PCI can offer. So, single-board computer makers are eagerly awaiting VITA 46, the emerging standard for high-speed serial interconnects as board I/O, he says. Likewise, VITA 42 (XMC) will provide a new standard for switched mezzanine cards, adding connectors that carry multigigabit serial links.

Already, notebook and desktop makers are transitioning from parallel buses to PCI Express, as are graphics-accelerator chip makers such as 3Dlabs, ATI, and NVIDIA, he says. Other options for badly needed high-speed interconnects include Serial RapidIO and StarFabric.

As hardware makers build these choices into their products, designers will have to pick fabrics to match their mezzanines and processors. And given their small market share, designers of military products probably will not drive that choice, but rather, will hang off the coattails of the commercial world, he says.

Regardless of those choices, options such as PCI Express Advanced Switching and RapidIO will be available for years, so Curtiss-Wright board designers will produce electronics flexible enough for any option.

“We’ll try to stay fabric-agnostic, using a middleware software layer to protect our customers’ investments and allow them to migrate,” Wemekamp says.

Customers demand flexibility

“We often have to upgrade to use fewer boxes (line-replaceable units, or LRUs), and also support legacy interfaces. So, even our latest single-board computers need 1553 ports on base cards,” Wemekamp says.

At the same time, designers of new systems such as the Army’s Future Combat System are looking at emerging standards such as Gigabit Ethernet, USB, Serial ATA, and new switch fabrics as well. Fitting all those options on a card forces Curtiss-Wright designers to confront thermal challenges, with rising watts per card.

Future single-board computers must be tailored to specific applications - for example, running a processor slowly if the board gets too hot. “People are concerned about power; they can’t cool it. We can provide enough horsepower to shrink from six to two LRUs, but the boards get too hot. So, people usually go with the thermal limit because they have more horsepower than they need anyway,” Wemekamp says.
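
A thermal-limit policy of the kind Wemekamp describes can be sketched as a trivial clock-throttling rule. The thresholds and clock steps below are invented for illustration; a real board would use its own sensor data and the processor’s actual power states.

```python
# Illustrative only: step the processor clock down as the board heats up.
MAX_TEMP_C = 85.0                   # assumed board thermal limit
CLOCK_STEPS_GHZ = [1.5, 1.0, 0.6]   # fastest first

def pick_clock(board_temp_c: float) -> float:
    """Step the processor clock down as the board nears its limit."""
    if board_temp_c < MAX_TEMP_C - 15:
        return CLOCK_STEPS_GHZ[0]   # plenty of margin: full speed
    if board_temp_c < MAX_TEMP_C - 5:
        return CLOCK_STEPS_GHZ[1]   # getting warm: back off
    return CLOCK_STEPS_GHZ[2]       # near the limit: crawl

for temp in (55.0, 72.0, 83.0):
    print(f"{temp:.0f} C -> {pick_clock(temp):.1f} GHz")
```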

Here comes the heat

Thermal management is the next big frontier for single-board computers, agrees PICMG’s Pavlat.

“Things are moving from parallel databuses to switched-serial interfaces, and that’s a good thing,” he says. “But cooling is the next major engineering challenge we’ll face. It’s already started; in April 2004, Apple started shipping the Power Mac G5, the first commercial liquid-cooled product.”

The one problem facing all industries - military, industrial, and telecommunications - is the upward trend in power density, and the increasing heat that results, especially in multicore processors. Designers will be forced to move beyond air cooling and begin to use liquid cooling. Designs such as the Advanced Telecom Computing Architecture (ATCA) have already pushed air-cooled electronics to the limit.

“We spent the ’90s getting faster, and now we have to figure out how to manage the heat,” Pavlat says. “We’re at the stage of saying that’s great, a 150-watt processor. Now what do I do with it?” he says. “You could use a box full of air-cooled 30-watt processors, or use two liquid-cooled 150-watt processors, which may actually be cheaper.”

Fabrics run today

High-speed fabric interconnects are no longer just plans on paper; they are being deployed today. “Fabric architecture has been critical for recent design wins, particularly for signal-processing applications,” says David Compston, director of marketing for Radstone Technology in Woodcliff Lake, N.J. “So, connectivity and high-bandwidth interconnects are where we’ve been focusing.”

In the past, military designers with high-bandwidth requirements had to use proprietary backplane interconnects. Today they are looking at StarFabric, Compston says. Another popular option is PCI Express Advanced Switching, still under development by engineers at StarGen in Marlborough, Mass.

Radstone leaders plan to launch a StarFabric switch in early 2005, intended for programs such as Apache Block 3, and various applications in naval radar, ground-mobile radar, and mine detection.

At the same time that military applications are getting faster, they are getting smaller. “6U is usually where we see state-of-the-art processing, but we’re now seeing requirements for smaller, more integrated systems with full capabilities, driven by the market for unmanned vehicles and the need to reduce power and reduce space,” he says.

Taken together, these trends present an engineering challenge: a fast, small computer concentrates heat. Radstone engineers, however, have found an advantage in dissipating heat from the 3U form factor, because its processors sit closer to the sidewalls than they would in a 6U box.

Single-board computers shrink onto one chip

As new technologies provide better computers, military requirements are growing even faster, says Craig Lund, chief technology officer for Mercury Computer Systems in Chelmsford, Mass.

High-performance applications include sensors that collect high-volume data streams, multimission computing, and the need to cram compute power into constrained environments as applications move closer to the sensors.

The solution to all those engineering challenges is switch-fabric-based architectures running with multiple processor configurations, he says. Multiprocessor chips are a near-term reality, but they demand complex software to manage the raw speed.

Another approach is “system-on-a-chip” (SOC) technology, which is quickly absorbing much of the functionality now found on a traditional single-board computer. Military systems that used COTS single-board computers do not always need those boards anymore; designers can simply place an SOC in the corner of some other board in the system, Lund says.

Because of their small size, SOCs soon could proliferate across the system as super-intelligent I/O controllers, taking on functions that previously required additional application-specific devices.

Fabrics are crucial here, too. Instead of the buses that connect a processor to peripheral chips on today’s single-board computers, such a sea of SOCs requires the high-speed, peer-to-peer connections of a switch fabric, Lund says.

This is not to say that single-board computers will disappear. A market still exists for modules using the highest-performance processors, which dissipate too many watts to be absorbed onto another board the way an SOC can be.

Fabrics boost efficiency

Designers at Analog Devices Inc. (ADI) in Norwood, Mass., will include a fabric port on their TS301 TigerSHARC DSP next year, says Michael Long, the company’s strategic marketing manager for DSP. Still, he insists chip manufacturers cannot blaze the trail alone; COTS board manufacturers will have to support fabric standards, too. That is one reason that ADI designers will build a variant of the TigerSHARC to support three different fabrics: Serial RapidIO, PCI Express, and Gigabit Ethernet.

Users choose RapidIO for its efficiency, he says; the standard emphasizes performance per square inch rather than raw, power-hungry throughput. Applications such as radar, sonar, and missile tracking allow much higher wattage budgets for the board than a wireless application does.

The challenge is how to perform fast Fourier transforms (FFTs) and other DSP tasks with both performance and power efficiency. That task is complicated because existing solutions cannot keep the processing core fed with data from off-chip storage.

The Momentum Computer Cheetah-Cr, from Mercury Computer Systems, is an Intel Pentium M processor-based CompactPCI single-board computer in a 6U form factor.

ADI designers cope by integrating enough memory on the chip to perform FFTs without fetching data from off-chip, Long says.

They also have moved from SRAM to denser embedded DRAM, integrating four times the memory in the same die area - jumping from 6 to 24 megabytes. DRAM also needs less power and has a lower error rate.
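
Those memory figures suggest a rough sizing exercise: how large an in-place complex FFT fits on chip at 6 versus 24 megabytes. The sketch below assumes 8 bytes per complex sample and power-of-two FFT lengths; real devices also need working space, so treat the results as upper bounds, not device specifications.

```python
# Rough sizing only: largest in-place complex FFT that fits on chip.
BYTES_PER_SAMPLE = 8  # complex sample: 32-bit real + 32-bit imaginary

def max_fft_points(mem_bytes: int) -> int:
    """Largest power-of-two point count whose data fits in mem_bytes."""
    n = 1
    while n * 2 * BYTES_PER_SAMPLE <= mem_bytes:
        n *= 2
    return n

for megabytes in (6, 24):
    points = max_fft_points(megabytes * 2**20)
    print(f"{megabytes} MB on chip -> up to a {points:,}-point FFT "
          "without off-chip fetches")
```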

The problem is even worse for wireless applications, which must buffer massive amounts of data at the antenna, creating problems with throughput and latency.

Fabrics can be a solution, moving data from device to device or node to node. But the high overhead of some fabrics means they would work better as a backplane between cards in a rackmount than between devices on a board.

Today, many users choose PCI Express for commercial applications and Serial RapidIO for military, aerospace, and communications, Long says.

That division is as much from force of habit as for technical reasons, he says. Intel’s backing of PCI Express has pushed it into many consumer applications. And military designers envy the pure multigigahertz clock speed of Pentium chips, but cannot support their power and heat requirements.

Hardware pushes fabrics to market

The RapidIO interconnect is well suited for military and aerospace applications, agrees Andrew Bunsick, product marketing manager for Altera Corp. in Kanata, Ontario.

That is because RapidIO was developed specifically as a high-performance, packet-switched interconnect technology, designed to pass data and control information between microprocessors, DSPs, communications and network processors, system memories, and peripheral devices, he says.

It is also a good match because it offers a common interconnect protocol for host and control processors and DSPs. And it provides scalability through its point-to-point I/O technology and switch-based architecture.
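
The scalability point can be illustrated with round numbers: on a shared bus every board divides one fixed pipe, while each port added to a switch brings its own link bandwidth. The rates below are invented for the sketch, not RapidIO specifications.

```python
# Illustrative only: shared bus versus point-to-point switch scalability.
BUS_TOTAL_GBPS = 2.0   # hypothetical shared parallel bus
LINK_GBPS = 2.5        # hypothetical serial link per switch port

for boards in (2, 4, 8, 16):
    per_board_bus = BUS_TOTAL_GBPS / boards
    switch_aggregate = boards * LINK_GBPS
    print(f"{boards:2d} boards: shared bus {per_board_bus:.2f} Gbit/s each; "
          f"switch aggregate {switch_aggregate:5.1f} Gbit/s")
```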

All industries are slow to adopt new interconnects, largely because hardware manufacturers are slow to provide complementary products, Bunsick says.

Manufacturers of application-specific standard products (ASSPs) have already released RapidIO switches and plan to release Serial RapidIO switches in early 2005. However, these switches provide a fixed number of RapidIO ports, not tailored to users’ system requirements. So, many users could need multiple devices to handle their switching and bridging requirements.

One solution is the field-programmable gate array, which offers the ability to bridge from RapidIO to anything, to support any number of switch ports, and to deliver any DSP function, he says.

Company information

Acromag Inc. Wixom, Mich. www.acromag.com
Ampro Computers San Jose, Calif. www.ampro.com
Carlo Gavazzi Mupac Inc. Electronic Packaging Brockton, Mass. www.carlogavazzi.com
Crystal Group Inc. Hiawatha, Iowa www.crystalpc.com
Curtiss-Wright Controls Embedded Computing Leesburg, Va. www.dy4.com
Diversified Technology Ridgeland, Miss. www.dtims.com
DNA Computing Solutions Richardson, Texas www.dnacomputingsolutions.com
Elma Bustronic Fremont, Calif. www.bustronic.com
GE Fanuc Embedded Systems Ventura, Calif. www.geindustrial.com/cwc/gefanuc/embedded/
General Micro Systems Rancho Cucamonga, Calif. www.gms4vme.com
Lockheed Martin Systems Integration Owego, N.Y. www.lockheedmartin.com/si
Macrolink Anaheim, Calif. www.macrolink.com
Maxwell Technologies San Diego, Calif. www.maxwell.com
MEN Micro USA Carrollton, Texas www.men.de
Mercury Computer Systems Chelmsford, Mass. www.mc.com
Motorola Embedded Communications Tempe, Ariz. www.motorola.com/computers
Nallatech Inc. Eldersburg, Md. www.nallatech.com
North Atlantic Industries Inc. Bohemia, N.Y. www.naii.com
Parvus Salt Lake City, Utah www.parvus.com
Pentek Upper Saddle River, N.J. www.pentek.com
Radstone Technology Towcester, England www.radstone.com
Sarsen Technology Marlborough, England www.sarsen.net/sarsen-manufacturer-bittware.html
SBS Technologies Raleigh, N.C. www.sbs.com
Sky Computers Peabody, Mass. www.skycomputers.com
TEWS Technologies Reno, Nev. www.tews.com
Thales Computers Raleigh, N.C. www.thalescomputers.com
Themis Computer Fremont, Calif. www.themis.com
VMETRO Houston, Texas www.vmetro.com
