Beyond 40 Gigabit Ethernet

March 4, 2019

It seems like only yesterday that we introduced 40 Gigabit Ethernet to the rugged embedded market – in the form of the DSP282A, SWE440 and SWE540 – but technology moves fast. 100 Gigabit Ethernet is becoming increasingly deployable – and, at Abaco, we’ll be looking to ensure that, when our customers need it, we can offer it.

There’s a lot of effort involved in preparing the groundwork. What silicon is already out there – and what’s in the pipeline? What about channel compliance at the new, higher signaling rates – and what are the implications for signal integrity, PCB construction, connector choices, and more?

For a vendor to roll out a well-rounded portfolio, at least four key products will be required:

  • Processing boards with a 100GbE Network Interface Controller (NIC)
  • 10/40G Ethernet switch with 100G uplinks
  • 100G Ethernet switch
  • FPGA processing boards

Desirable characteristics

Looking at the payload board (the SBC) first, the desirable characteristics for the NIC include:

  • Two 100GbE ports
  • x16 PCIe Gen3 or x8 Gen4 host-side interface to ensure no data bottlenecks
  • RDMA over Converged Ethernet (RoCEv2) support, in either a hardware or software implementation
  • Standard driver support under common operating systems
  • Extended temperature range
  • Extended availability
  • IEEE 1588 Precision Time Protocol (PTP) hardware support (see the sketch below)
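On a Linux target, that last item is straightforward to sanity-check from user space. The following is a minimal sketch – the interface name is a placeholder – that uses the standard ethtool ioctl to report whether a NIC exposes a PTP hardware clock and hardware packet timestamping. It surfaces the same information that running ethtool -T against the interface prints:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <linux/ethtool.h>
#include <linux/net_tstamp.h>
#include <linux/sockios.h>

int main(int argc, char **argv)
{
    /* "eth0" is a placeholder - pass the real interface name on the command line. */
    const char *ifname = (argc > 1) ? argv[1] : "eth0";

    struct ethtool_ts_info info = { .cmd = ETHTOOL_GET_TS_INFO };
    struct ifreq ifr;

    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    ifr.ifr_data = (char *)&info;

    if (ioctl(fd, SIOCETHTOOL, &ifr) < 0) {
        perror("SIOCETHTOOL");
        close(fd);
        return 1;
    }

    /* phc_index is -1 if the NIC exposes no PTP hardware clock. */
    printf("%s: PHC index %d\n", ifname, info.phc_index);
    printf("  HW TX timestamps: %s\n",
           (info.so_timestamping & SOF_TIMESTAMPING_TX_HARDWARE) ? "yes" : "no");
    printf("  HW RX timestamps: %s\n",
           (info.so_timestamping & SOF_TIMESTAMPING_RX_HARDWARE) ? "yes" : "no");

    close(fd);
    return 0;
}
```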

Several manufacturers – including Intel, Broadcom, Chelsio and Mellanox – are offering what looks like appropriate silicon.

These NICs would perhaps be implemented on a 6U OpenVPX single board computer with one or more Intel processors at its heart. Each node would have a dual-port 10/40/100G Ethernet NIC attached to the processor chipset via a PCI Express switch. The switch matters because PCIe Gen4 is not expected on Intel chips designated for embedded use for some time, and some NICs offer only x8 PCIe. Because those x8 ports are often Gen4-capable, the switch can map a host-side x16 Gen3 link to a device-side x8 Gen4 link, preserving the bandwidth needed to drive 100GbE at full speed.
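To make the Gen3/Gen4 trade-off concrete, here is a back-of-the-envelope calculation – a sketch only, which assumes 128b/130b line coding and ignores PCIe protocol overhead, so real-world throughput is somewhat lower:

```c
#include <stdio.h>

/* Raw PCIe bandwidth vs. 100GbE line rate, per direction. */
int main(void)
{
    const double enc = 128.0 / 130.0;       /* 128b/130b coding efficiency    */
    const double gen3_lane = 8.0 * enc;     /* 8 GT/s  -> ~7.88 Gb/s per lane */
    const double gen4_lane = 16.0 * enc;    /* 16 GT/s -> ~15.75 Gb/s per lane */

    printf("x16 Gen3: %6.1f Gb/s per direction\n", 16 * gen3_lane);  /* ~126.0 */
    printf("x8  Gen4: %6.1f Gb/s per direction\n",  8 * gen4_lane);  /* ~126.0 */
    printf("one 100GbE port at line rate: 100.0 Gb/s per direction\n");
    return 0;
}
```

Both links work out to roughly 126 Gb/s in each direction – which is why a PCIe switch can trade lane count for generation without giving up the bandwidth a 100GbE port needs.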

Head start

We also expect our customers to be asking for switch products that have multiple 10/40G Ethernet ports and one or two 100G uplinks. These can be used for data aggregation, and to build fat-tree networks. Here, we have a head start with our existing SWE540A and RES3000 switch products. The former is a 6U OpenVPX board with 20 ports of 40GBASE-KR4/10GBASE-KX4 on the Data Plane and 16 ports of 1000BASE-KX on the Control Plane.
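To put the aggregation role in bandwidth terms, consider a deliberately hypothetical configuration – 20 x 40G downlinks feeding two 100G uplinks (the port counts are illustrative, not a product specification):

```c
#include <stdio.h>

/* Oversubscription estimate for a hypothetical aggregation switch. */
int main(void)
{
    const double downlink = 20 * 40.0;   /* 800 Gb/s of downlink capacity */
    const double uplink   =  2 * 100.0;  /* 200 Gb/s of uplink capacity   */

    printf("Oversubscription: %.0f:1\n", downlink / uplink);  /* 4:1 */
    return 0;
}
```

An oversubscription ratio like this can be perfectly workable for sensor aggregation, where the downlinks rarely all burst at once.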

In parallel, we believe customers will be looking for a rugged, boxed product that would present multiple 10/40GBASE-T copper ports, with one or two 100GBASE-SR4 fiber uplink ports, using MIL-DTL-38999 connectors with a mix of copper and fiber inserts.

And speaking of connectors… It is commonly thought that next-generation VPX connectors such as TE’s RT3 or an Amphenol equivalent will be required to carry 100GBASE-KR4 copper interconnects at their 4 x 25G lane rates. We’re aware of third parties that have demonstrated 100GBASE-KR4 over existing RT2 connectors – but is this viable and reproducible over the full temperature range?

Difficult choices

Then, there’s the question of the 100G Ethernet switch fabric. What’s capturing our attention here is that such switch fabric devices can be very power-hungry: one example we’ve seen consumes around 140W for the switch fabric device alone in its quiescent state, with an additional 1W per active 40G port and 1.4W per active 100G port. Our customers have always faced the sometimes difficult choice between maximum performance on the one hand and minimum power consumption on the other: it will be interesting to see how this plays out.
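To put those figures in context, here is a rough estimate for a hypothetical configuration – the active port counts are ours, purely for illustration:

```c
#include <stdio.h>

/* Rough power estimate using the figures quoted above. */
int main(void)
{
    const double quiescent_w = 140.0;  /* switch fabric device, idle */
    const double per_40g_w   = 1.0;    /* per active 40G port        */
    const double per_100g_w  = 1.4;    /* per active 100G port       */

    const int active_40g  = 20;        /* illustrative port counts   */
    const int active_100g = 2;

    double total = quiescent_w + active_40g * per_40g_w + active_100g * per_100g_w;
    printf("Estimated fabric power: %.1f W\n", total);  /* 162.8 W */
    return 0;
}
```

Note that the quiescent figure dominates: on these numbers, the fabric draws most of its power whether or not traffic is flowing.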

FPGAs that support SerDes ports capable of carrying 100G Ethernet are readily available – and in fact are often the device of choice for early-stage investigation of channel compliance at these rates.

The above is an outline of some of our thinking as we help our customers achieve the higher levels of performance that their increasingly sophisticated applications demand. Our roadmap is still fluid – so if you have any thoughts on the subject, we’d value the opportunity to engage with you. Contact me at peter.thompson [at] abaco.com

About the Author

Peter Thompson | Senior Business Development Manager

Peter Thompson is senior business development manager for High Performance Embedded Computing. He first started working on High Performance Embedded Computing systems when a 1 MFLOP machine was enough to give him a hernia while carrying it from the parking lot to a customer’s lab. He is now very happy to have 27,000 times more compute power in his phone, which weighs considerably less.
