Switch fabric high-speed serial network technology enters the mainstream for demanding signal processing
Editor's note: GE Intelligent Platforms changed its name to Abaco Systems on 23 Nov. 2015 as a result of the company's acquisition last September by New York-based private equity firm Veritas Capital.
Technology focus, 24 Aug. 2010 -- Switch fabric high-speed serial networking for demanding embedded parallel processing architectures is benefiting from industry standards and the OpenVPX Multiplane Architecture to take its place in high-performance aerospace and defense applications like radar, signals intelligence, and electronic warfare.
Intensive aerospace and defense embedded computing applications like these all share a basic need: to move massive amounts of data through their computing systems as quickly as possible, with few, if any, delays from software code overhead, communications authentication, conflicts among data packets, or other data roadblocks and bottlenecks.
Traditional backplane databus approaches to this task in high-end and complex applications essentially have reached the limits of their capabilities, which is leading systems integrators to find new ways of moving mountains of data quickly and reliably. These approaches often involve the latest generations of high-speed serial switch fabric network technology organized in a division of labor that not only makes the most of each fabric's separate advantages, but that also considers upfront costs and prospects for long-term industry support.
The idea is to find the most efficient ways to move data through the system along conduits best suited for the different varieties of data running through these systems, while using widely supported commercial standards to assure systems integrators working on these high-end applications that they will have reasonably priced support for years to come, and can take advantage of commercially available technology as it evolves for upgrades and technology insertion.
Lessons of parallel processing
Computer scientists have understood for years that one of the best ways of processing a big chunk of data is to break the job down into several smaller jobs, and process them simultaneously on several different central processing units. The steadily increasing capability of today's microprocessors has made the job of parallel processing relatively easy, compared to years past. The difficulty these days is getting the data to the processors quickly enough for efficient computational operations.
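To make the divide-and-process idea concrete, the minimal sketch below splits one large block of samples into equal slices and hands each slice to its own worker thread; the buffer size, worker count, and the squared-sample computation are illustrative stand-ins, not code from any system described in this article.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4
#define DATA_LEN    (1 << 20)          /* one large block of samples */

static float data[DATA_LEN];

struct slice { size_t start, len; double partial; };

/* Each worker processes only its own slice of the big buffer. */
static void *process_slice(void *arg)
{
    struct slice *s = (struct slice *)arg;
    double acc = 0.0;
    for (size_t i = s->start; i < s->start + s->len; i++)
        acc += data[i] * data[i];      /* stand-in for real signal processing */
    s->partial = acc;
    return NULL;
}

int main(void)
{
    pthread_t tid[NUM_WORKERS];
    struct slice s[NUM_WORKERS];
    size_t chunk = DATA_LEN / NUM_WORKERS;

    /* Break the big job into smaller jobs and run them simultaneously. */
    for (int i = 0; i < NUM_WORKERS; i++) {
        s[i].start = (size_t)i * chunk;
        s[i].len   = chunk;
        pthread_create(&tid[i], NULL, process_slice, &s[i]);
    }

    double total = 0.0;
    for (int i = 0; i < NUM_WORKERS; i++) {
        pthread_join(tid[i], NULL);
        total += s[i].partial;
    }
    printf("total energy: %f\n", total);
    return 0;
}
```

The point of the sketch is the division of labor. In a real radar or signals intelligence system the slices would travel to separate processor cards over the backplane, which is exactly where the interconnect, rather than the processors, becomes the limiting factor.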
Think of a factory that increases the speed and capacity of each station along the assembly line, but neglects its conveyor belts. Each station on the line might be able to increase its throughput separately, but overall the factory cannot move any faster because the old, creaky conveyor belts can't feed materials to each station quickly enough to keep it at capacity.
It is much the same with high-end parallel processing computer systems. Today's microprocessors crunch data at screaming-fast rates, but the overall computer system continues to plod along if data cannot reach the microprocessors fast enough. Computer architects often refer to this phenomenon as "keeping the processors fed."
Traditional backplane databus structures, in essence, cannot keep the processors fed. Parallel databus structures move data in parallel ranks much like soldiers marching in formation. The problem, however, is their limited speed. Instead, today's parallel processing computer architectures use high-speed serial data switch fabrics that move data bits just one at a time, but at blinding speeds.
Today's switch fabrics
Today's most dominant serial switch fabrics are Serial RapidIO, PCI Express, and Gigabit Ethernet. InfiniBand is moving into niche applications, but still has a loyal following, while others that were players in the past, such as StarFabric, seem to be falling by the wayside.
"If a vendor had gotten started with something like InfiniBand or StarFabric, and the programs are ongoing, they will continue to supply those switch fabrics," says Rodger Hosking, vice president of signal processing specialist Pentek Inc. in Upper Saddle River, N.J. "But with new systems coming up, vendors will look around to see who is offering which products, and will see far more offerings for PCI Express and Serial RapidIO."
The primary advantages of these switch fabrics involve their commercial viability, widespread industry understanding, and clear upgrade paths for the future. Not only are these fabrics open-systems commercial standards that move data quickly, but they also come included with some of today's most popular microprocessors used in parallel-processing embedded applications. PCI Express, for example, is included with the most advanced processors from Intel Corp. in Santa Clara, Calif., while PCI Express and Serial RapidIO come with PowerPC-based microprocessors from Freescale Semiconductor in Austin, Texas. "If you are Intel-oriented, PCI Express is built in, and is free of charge; it comes along for the ride," points out David Pepper, product manager and technologist for single-board computers at GE Intelligent Platforms in Charlottesville, Va.
"PCI Express is used on commercial applications; all PCs have PCI Express, and the market has developed a commercial infrastructure," says Ben Klam, vice president of engineering at embedded computer specialist Extreme Engineering Solutions Inc. (X-ES) in Middleton, Wis. "Now you are seeing that with Gigabit Ethernet."
PCI Express, moreover, is fast. The first generation of the switch fabric moves data at 2.5 gigabits per second per lane, while the second generation moves data at 5 gigabits per second per lane. "These gigabit serial links are arranged as bonded lanes, four or eight lanes bonded together, that form a logical channel," says Pentek's Hosking. "Every time you add a lane, you add more bandwidth to that logical channel."
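Hosking's point about bonded lanes is easy to put in numbers. The back-of-the-envelope sketch below multiplies the per-lane rates cited above by the lane count and then discounts the 8b/10b line coding that first- and second-generation PCI Express use on the wire; the results are rough estimates, not measured throughput for any particular product.

```c
#include <stdio.h>

/* Raw signaling rate per lane, in gigabits per second, as cited in the
 * article: 2.5 Gb/s for first-generation PCI Express, 5 Gb/s for second. */
static double lane_rate_gbps(int generation)
{
    return (generation == 1) ? 2.5 : 5.0;
}

int main(void)
{
    /* Gen 1 and Gen 2 links use 8b/10b line coding, so only 8 of every
     * 10 bits on the wire carry payload. */
    const double coding_efficiency = 8.0 / 10.0;

    int generations[] = { 1, 2 };
    int widths[]      = { 1, 4, 8 };   /* bonded lanes forming one logical channel */

    for (int g = 0; g < 2; g++)
        for (int w = 0; w < 3; w++) {
            double raw    = lane_rate_gbps(generations[g]) * widths[w];
            double usable = raw * coding_efficiency;
            printf("Gen %d x%d: %5.1f Gb/s raw, %5.1f Gb/s after 8b/10b\n",
                   generations[g], widths[w], raw, usable);
        }
    return 0;
}
```

An eight-lane, second-generation link, for example, works out to 40 gigabits per second of raw signaling and roughly 32 gigabits per second of usable bandwidth before protocol overhead.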
Gigabit Ethernet, which also is included on today's popular microprocessors, is among the most ubiquitous and widely understood switch fabrics, with a broad installed base of 1-gigabit and 10-gigabit versions, and a future that includes 40- and 100-gigabit versions.
"Gigabit Ethernet is agnostic about what you are talking to," explains Andy Reddig, president and chief technology officer at embedded signal processing expert TEK Microsystems Inc. in Chelmsford, Mass. "You can talk to an X86 or PowerPC [microprocessors], and you could be talking inside the box or to a computer on the other side of the world; it is agnostic to that."
Field programmable gate arrays from Xilinx Inc. in San Jose, Calif. now have 1-gigabit Ethernet built in, and in the future will come with 10-gigabit Ethernet as part of the package. "I'm looking forward in two or three years when Xilinx FPGAs will have a 10-gigabit Ethernet interface inside them," Reddig says. "When they do that, I won't have to use a lot of FPGA gates to implement it, and then 10-gigabit Ethernet will really take off."
Serial RapidIO is a fast switch fabric with commercial support from Freescale Semiconductor, and has been a household name in high-end embedded computing since switch fabrics entered the conversation. Not only has it been around for a relatively long time, but it also has a reputation for being among the best switch fabric choices for low network overhead and low latency.
"Ethernet is great for sharing with many people, whereas Serial RapidIO or PCI Express is much better for point-to-point communications, explains Aaron Frank, product marketing manager for switches, routing, and network products at Curtiss-Wright Controls Embedded Computing in Ottawa.
When it comes to InfiniBand, some experts in the embedded computer industry say this switch fabric is all but dead. "It has zero traction," says TEK Micro's Reddig. "InfiniBand has virtually disappeared from embedded today. It did a lot of good things, but in power-, size-, and weight-constrained applications it is not very efficient."
Others are not so sure, however, and say InfiniBand's obituary may have been written prematurely. "If you had asked me a year ago, I would have said there is nothing going on with InfiniBand outside of the large data centers," says GE's Pepper. "But what's happening in the deployed world, the radar guys are saying InfiniBand is a good, reliable fabric that is scalable and with low latency."
Pepper, who is based in Huntsville, Ala., says InfiniBand, like RapidIO, has a longstanding reputation for low overhead and predictably fast latency in applications where the fast passing of data is absolutely critical. "Certain folks get concerned about the potential latency and overhead associated with Gigabit Ethernet, and those who are sensitive to that want to use InfiniBand."
InfiniBand also may be used as a technology bridge in the industry's transition to 40-gigabit Ethernet. A 40-gigabit-per-second version of InfiniBand may become widely available before 40-gigabit Ethernet does, says Marc Couture, director of applications engineering for advanced computer solutions at Mercury Computer Systems Inc. in Chelmsford, Mass.
There are other data interconnects that are coming into play in today's embedded computing architectures, although some would argue that these are simply high-speed and specialized point-to-point interconnects, and not really switch fabrics at all.
Among these are interconnects specifically designed to exchange data among field-programmable gate arrays. The first, called Aurora, is only for communication between FPGAs made by Xilinx. The other is called SerialLite, and is for linking FPGAs from Altera Corp. in San Jose, Calif.
"Aurora is good for us because it is the most lightweight interface to get from card to card if there are FPGAs on both ends," says TEK Micro's Reddig. "It gets 99.9 percent of theoretical maximum across the wire. If you have four cards with FPGAs, then Aurora is a really good choice, but it is not a fabric; it is a point-to-point serial interface."
As for Altera's version of this kind of interconnect, "the latest version is SerialLite II, which is a very lightweight, fast interconnect for Altera FPGA communications," says Mark Littlefield, director of the Curtiss-Wright Controls Embedded Computing office in Chatsworth, Calif. "There are certain parts of the large radar problem and signals intelligence where you want to flow data from one FPGA to the next, and then to the next, as a signal-processing chain," he says. "It is low overhead, low latency, and it is reliable."
Still, in an open-systems world, serial interconnects that are vendor-specific can make some systems integrators nervous. There may be an open-systems solution in the future, however. "We are starting to see from our customers somewhat of a move away from that," Littlefield says. "Gazing into my cloudy crystal ball, the market will want to go to a standards-based protocol before too much longer, and that will probably be 40-gigabit Ethernet XAUI," which is short for 10-gigabit Attachment Unit Interface.
"XAUI can be used with 10-gigabit or 40-gigabit Ethernet," Littlefield says. "It is very lightweight, even when compared to RapidIO, so it is good for point-to-point, with the same order of overhead with Aurora and SerialLite."
Several years ago, debate raged in the embedded computing community over which switch fabric was the best for different applications. Those who were there refer to those days as the "fabric wars." Today's approaches -- particularly in embedded computing applications -- are less about pitting one switch fabric against another, and more about industry consensus on a standard approach that seeks to move data simultaneously on several different information conduits optimized for each use.
OpenVPX Multiplane Architecture
Enter the OpenVPX Multiplane Architecture. This is an industry-standard parallel processing model for high-end applications like radar processing, signals intelligence, and electronic warfare that relies on high-speed serial data switch fabrics like Serial RapidIO, PCI Express, and Gigabit Ethernet organized logically to move data throughout embedded computing systems quickly and efficiently.
The OpenVPX architecture, despite its name, applies not only to VPX embedded computing architectures, but also to VXS and AdvancedTCA (ATCA), as well as to other parallel computing approaches that rely on switch fabrics, explains Mercury's Couture. "The defense world revolves around VPX, while the PICMG telecommunications world revolves around AdvancedTCA," Couture says. Ruggedized applications, he says, most likely will be VPX systems because no plans are in the works to ruggedize AdvancedTCA sufficiently for most aerospace and defense applications.
The OpenVPX Multiplane Architecture essentially divides data communications tasks within a parallel processing system into different virtual planes, or layers, and organizes them according to the kind of data that each layer handles. The different virtual layers of the OpenVPX architecture consist of management plane, control plane, data plane, expansion plane, and user plane.
At the lowest level is the management plane, which is always on and functionally is decoupled from the other planes. The management plane monitors system health, performs diagnostics for maintenance and troubleshooting, and also can do prognostics for predictive maintenance.
Speed, overhead, and latency are not nearly as important for the management plane as the ability to communicate with the widest possible variety of chips, boards, boxes, sensors, and other systems. Ethernet, in one or more of its versions, is a preferred switch fabric for the management plane.
Next is the control plane, which can act like a traffic cop in the network by keeping conflicts to a minimum and handling interrupts so they cause minimal disruption. Ethernet also is a preferred fabric for the control plane.
The third plane in the OpenVPX architecture is the data plane, which calls for a high-speed switch fabric to transfer data among processors or other components on a board, or among separate boards in a system, or even among separate boxes in a complex system. For this plane, system integrators can use either a distributed architecture or a switch through which modules on the network route their communications. For the data plane, the preferred switch fabrics tend to be PCI Express, Serial RapidIO, sometimes InfiniBand, and some designers also are considering Ethernet running at 10 gigabits per second or faster.
The next layer is the expansion plane, which is intended for high-throughput data communications, typically between two specific entities in the system. In practice, this often means the expansion layer is not a network at all, but is simply a point-to-point communications link that handles raw data moving at tremendous speeds. The expansion plane is eight lanes of PCI Express, says Mercury's Couture, and is especially useful in enabling general-purpose graphics processing units (GPGPUs) to communicate at fast speeds with central processing units.
Next comes the user plane, in which systems integrators can add their proprietary information management approaches.
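Taken together, the five planes amount to a mapping from each layer to the fabric that typically carries it and the job it performs. The sketch below restates the descriptions above in tabular form; the fabric assignments are the typical choices cited in this article, not requirements of the OpenVPX standard.

```c
#include <stdio.h>

/* The five virtual planes of the OpenVPX Multiplane Architecture, paired
 * with the fabrics the article cites as typical choices for each. */
struct openvpx_plane {
    const char *plane;
    const char *typical_fabric;
    const char *role;
};

static const struct openvpx_plane planes[] = {
    { "management", "Ethernet",
      "always-on health monitoring, diagnostics, prognostics" },
    { "control",    "Ethernet",
      "network traffic cop: minimizes conflicts, handles interrupts" },
    { "data",       "PCI Express, Serial RapidIO, InfiniBand, 10-Gigabit Ethernet",
      "high-speed transfers among processors, boards, and boxes" },
    { "expansion",  "PCI Express (e.g., eight lanes)",
      "point-to-point raw data, such as CPU-to-GPGPU links" },
    { "user",       "integrator-defined",
      "proprietary information-management traffic" },
};

int main(void)
{
    for (size_t i = 0; i < sizeof planes / sizeof planes[0]; i++)
        printf("%-10s | %-60s | %s\n",
               planes[i].plane, planes[i].typical_fabric, planes[i].role);
    return 0;
}
```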
The whole idea of segregating data paths in the OpenVPX Multiplane Architecture is to minimize the possibility of conflicting data traffic. "The whole sense of contention goes away," Couture explains. "This model enables more processing in the platform in real time."
In this environment, systems integrators have some serious decisions to make when choosing which switch fabrics to use to meet system specifications, differentiate themselves from the competition, and create systems that are affordable. "It's an interesting design decision to architect which fabrics to use for which functions," says Pentek's Hosking.
"PCI Express might not be the best protocol for embedded systems. It was developed primarily as a motherboard expansion of PCI," Hosking says. "But you can find PCI Express easily and expensively. It's so popular because for the huge number of devices and silicon the use PCI Express to cover so many different functions."
At the same time, Serial RapidIO has advantages for complex embedded systems because it supports several different masters, and can support more variety in embedded systems than PCI Express can do easily, Hosking says. Would it make sense to use both in the same system? "We use both all the time," Hosking says.
Gigabit Ethernet is particularly useful for inter-system communications, Hosking continues. "We don't see it for real-time links between system elements as much as we see it for control and management functions among system elements. It has a lot of overhead, and Gigabit Ethernet was not designed to be extremely deterministic and low latency, but it is robust, and will go through a 3-foot thick wall; that's the beauty of Ethernet."
For higher-speed data flow, however, systems designers should not dismiss 10-gigabit Ethernet. "10-gig Ethernet for some real-time data transfers can be useful because it has margins in the channel over what you require," Hosking explains. "It's cheap, and parts are available." Echoes GE's Pepper, "10-gigabit Ethernet seems to be a real sweet spot on the data plane."
Serial RapidIO, meanwhile, works in a similar way to PCI Express for high-speed networking interconnects, yet it may be the better choice for extremely complex systems, Hosking says. "Since it supports multiple masters and processors in the system, it is a more elegant solution because each master can discover all devices connected, including processors. It can communicate with multiple processors, and allow each processor to communicate with multiple peripherals. It has some more layers and overhead than PCI Express, but it gives you the freedom to architect a complex system."
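Hosking's discovery point can be illustrated in the abstract. The sketch below is not the Serial RapidIO enumeration protocol or API; it is a hypothetical breadth-first walk over a toy topology that shows what it means for any master to discover every processor and peripheral reachable through the fabric.

```c
#include <stdio.h>
#include <stdbool.h>

/* A toy fabric topology: node i lists the nodes it links to directly.
 * This is a hypothetical illustration, not Serial RapidIO itself. */
#define MAX_NODES 6
#define MAX_LINKS 4

static const int links[MAX_NODES][MAX_LINKS] = {
    /* 0: master     */ { 1, -1, -1, -1 },
    /* 1: switch     */ { 0,  2,  3,  4 },
    /* 2: processor  */ { 1, -1, -1, -1 },
    /* 3: processor  */ { 1,  5, -1, -1 },
    /* 4: FPGA card  */ { 1, -1, -1, -1 },
    /* 5: peripheral */ { 3, -1, -1, -1 },
};

/* Breadth-first walk from one master, marking every reachable device. */
static void discover(int master)
{
    bool seen[MAX_NODES] = { false };
    int queue[MAX_NODES], head = 0, tail = 0;

    seen[master] = true;
    queue[tail++] = master;

    while (head < tail) {
        int node = queue[head++];
        printf("master %d discovered node %d\n", master, node);
        for (int i = 0; i < MAX_LINKS; i++) {
            int next = links[node][i];
            if (next >= 0 && !seen[next]) {
                seen[next] = true;
                queue[tail++] = next;
            }
        }
    }
}

int main(void)
{
    discover(0);   /* any master in the fabric could run the same walk */
    return 0;
}
```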
Also for extremely complex systems, Mercury just introduced a protocol-agnostic, multi-fabric interconnect technology called Protocol Offload Engine Technology (POET) for embedded computing systems based on Intel microprocessors, which forms a bridge between microprocessors and switch fabrics.
POET is implemented as intellectual property in Xilinx or Altera FPGAs and provides standard interfaces to bridge between processors and Serial RapidIO or 10-Gigabit Ethernet. Future releases will support 40-Gigabit Ethernet and InfiniBand, as well as offload protocols such as RoCE (RDMA over Converged Ethernet), an InfiniBand-over-Ethernet standard.
Gateway to optical computing
As switch fabrics become progressively faster, systems designers also must consider whether to implement those fabrics over copper wire or optically, over fiber or free-space laser links. Designers may face these decisions more quickly than they would like, because the faster a switch fabric moves data, the shorter its physical reach can be when using copper wire.
"The migration from 10/100 Ethernet to Gigabit Ethernet was relatively painless, and now 40-gigabit and 100-gigabit Ethernet are starting to be looked at," says Curtiss-Wright's Frank. "Once you go beyond Gigabit, you move to a much shorter read in the network. Gigabit Ethernet can go to 500 meters over copper cable, but 10-gigabit is limited to 50 meters over copper." Use of 40-gigabit Ethernet over copper wire "is pretty much unheard of," he says.
Advances in telecommunications technology, however, are yielding affordable solutions for implementing extremely high-speed switch fabrics over optical fiber and free-space lasers. "All these interconnects are available in optical, and in Gigabit- and 10-gigabit Ethernet we have a lot of interest," Frank says.
The differences in cable runs between copper wire and optical are enormous. Systems designers, for example, could run Gigabit- and 10-gigabit Ethernet over single-mode optical fiber for as far as 40 kilometers. Using low-cost fiber, designers could run Gigabit Ethernet for 100 to 500 meters, which for many applications is more than enough.
The differences in 40-gigabit Ethernet are even more dramatic. Using copper wire, engineers could move data no farther than 1 to 10 meters, whereas they could run 40-gigabit Ethernet links for kilometers when using optical fiber, Frank says. Even with 100-gigabit Ethernet, interconnect links could run for kilometers. In addition to cable lengths, optical fiber or free-space laser interconnects also offer advantages in resistance to electromagnetic interference over copper cable.
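Frank's reach figures lend themselves to a simple design check. The sketch below encodes the approximate copper limits he cites and flags when a planned cable run forces a move to optical fiber; actual limits depend on cable type and the specific standard options chosen, so the numbers should be treated as rough guidance.

```c
#include <stdio.h>

/* Approximate copper-cable reach limits cited in the article, in meters. */
struct reach_limit {
    const char *fabric;
    double copper_max_m;
};

static const struct reach_limit limits[] = {
    { "Gigabit Ethernet",    500.0 },
    { "10-Gigabit Ethernet",  50.0 },
    { "40-Gigabit Ethernet",  10.0 },   /* 1 to 10 m; copper is rarely used */
};

/* Report whether a planned run can stay on copper or must move to fiber. */
static void check_run(const struct reach_limit *l, double run_m)
{
    if (run_m <= l->copper_max_m)
        printf("%-20s %6.0f m: copper is workable\n", l->fabric, run_m);
    else
        printf("%-20s %6.0f m: exceeds ~%.0f m copper limit, use optical fiber\n",
               l->fabric, run_m, l->copper_max_m);
}

int main(void)
{
    double planned_run_m = 100.0;   /* example cable run between two boxes */
    for (size_t i = 0; i < sizeof limits / sizeof limits[0]; i++)
        check_run(&limits[i], planned_run_m);
    return 0;
}
```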