The coming revolution in sensor and signal processing

Aug. 25, 2020
High-performance embedded computing is reaping the benefits of open-systems standards, new FPGA architectures, and artificial intelligence for never-before-seen edge computing performance.

NASHUA, N.H. - Military sensor and signal processing technologies are going through revolutionary improvements that promise big enhancements to applications like radar, electronic warfare (EW), signals intelligence (SIGINT), high-performance edge computing, and anti-submarine warfare (ASW).

Big enablers that have been coming online over the past year or so include artificial intelligence (AI) and machine learning for so-called smart sensors, open-systems industry standards like the Sensor Open Systems Architecture (SOSA), new architectures for field-programmable gate arrays (FPGAs), information security, fast networking over copper and optical interfaces, and fast A/D and D/A conversion.

With these enhancements, systems designers can process more data than ever before; reduce size, weight, and power consumption (SWaP); free up slots in the embedded computing enclosure for additional capabilities; and place high-performance sensor and signal processing as close to receiver antennas as possible in SWaP-constrained applications like unmanned vehicles.

“There’s more data, more processing for the data ... it’s all more, more, more,” says David Jedynak, chief technology officer at embedded computing specialist Curtiss-Wright Defense Solutions in Ashburn, Va.

The influence of SOSA

The SOSA standard, supervised by the Open Group in San Francisco, aims generally at high-performance embedded computing, but is being developed specifically with signal processing in mind. The standard seeks to tame the proliferation of open-systems VPX standards and create a manageable set of interoperability guidelines for aerospace and defense systems to enable a broad variety of components from separate vendors to work together easily.

“SOSA stands for sensor [Sensor Open Systems Architecture], and they are trying to make things more interoperable with fewer different flavors of modules, interfaces, and backplanes,” explains Rodger Hosking, vice president of embedded computing and signal processing specialist Pentek Inc. in Upper Saddle River, N.J.

“The SOSA effort is to reduce the degree of variability and to standardize such that multiple vendors can supply systems that are reusable and upgradable,” Hosking continues. “It’s driven by trying to save costs, and to deal with the complexity of any given system. Designers have to attack at least part of a system at a higher, or common, level so the modules can talk to each other.”

Hosking calls this trend “an abstraction away from the very lowest level of system functions to higher-level, more consistent interfaces.” Consistency is the key, he says. “The whole mission of SOSA is to keep those interfaces as consistent as possible so you can have compatibility among different systems vendors.”

Consistent interfaces, as well as higher levels of systems integration and complexity, are at the heart of SOSA — particularly for sensor and signal processing applications — says Predrag Mitrovic, senior systems architect at high-performance embedded computing expert Abaco Systems in Huntsville, Ala.

“Everything is becoming more dense and integrated, which is reflected in the RF and optical backlink connectivity in the VPX ecosystem,” Mitrovic says. “In the past you have four to eight RF connects in a very dedicated space on the VPX backplane. Now this is going to 10 or 20 of those. This will allow for more connections over the backplane to ease maintenance in the future without worrying about doing the pre-wiring up-front.”

SOSA is catching on so quickly in the embedded computing industry for sensor and signal processing that some designers who don’t yet need SOSA-compliant hardware now are asking for it anyway. “We are seeing that our customers who don’t have the need for SOSA are asking for it because it is something they are hearing about,” Mitrovic says. “They are willing to take SOSA-aligned hardware today to be ready for the future.”

Advancements in FPGAs

One important trend in sensor and signal processing today involves improvements in embedded processors — particularly FPGAs. Xilinx Inc. in San Jose, Calif., offers the Versal Adaptive Compute Acceleration Platform (ACAP), an integrated multicore embedded computer that can adapt to evolving signal and sensor processing algorithms. The Versal ACAP is customizable at the hardware and software levels to fit different applications and workloads.

The Xilinx Versal ACAP “is what they used to call a multiprocessing system-on-chip device, and this is one of those on steroids,” says Abaco’s Mitrovic.

The device offers heterogeneous acceleration and seeks to change how FPGAs are developed and how engineers work with them. “It’s so easy that any software engineer can program the FPGA, rather than requiring a special skillset,” Mitrovic says. “Xilinx is targeting four times the compute technology over what has been available from them.”

The Versal ACAP has the embedded sensor and signal processing community talking, and is generating substantial interest. “In the future there will be these highly integrated devices that integrate the Arm processor, other real-time processing units, and integrated AI cores,” Mitrovic says.

How quickly can these devices take off in sensor and signal processing applications? It may be a while, but they may catch on more quickly than expected. “Our customers are very much open to this, but it will take time,” Mitrovic says. “Take the RFSoC [RF system-on-chip], for example. When it was introduced, it was considered revolutionary; traditional folks in the EW and radar industry were skeptical. For the first generation of RFSoC everyone was trying to understand the technology, but two years later we are seeing significant traction in some of the major government programs. We expect to see the same thing happening with Versal.”

One big advantage to the Xilinx Versal ACAP is extremely tight integration. “Versal is seven-nanometer technology, where the previous FPGA technology was 16 nanometers,” Mitrovic points out.

The Versal architecture also lends itself to AI and machine learning applications. “Historically signal processing has been done with floating point, but with machine learning you are dealing with lower resolution, and the processors have not been optimized to that kind of math very well,” explains Denis Smetana, product manager for the digital signal processing (DSP) product line at Curtiss-Wright Defense Solutions. “Xilinx has their Versal FPGA, which is designed to optimize that lower-resolution math to perform inferencing functions like those in neural networking for sensor and signal processing.”
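
The arithmetic behind that optimization is easy to illustrate. The sketch below is a minimal example in plain NumPy, not Xilinx tooling, and the helper quantize_int8 is invented for illustration: weights and samples are mapped to 8-bit integers, the multiply-accumulates run in integer arithmetic, and a single rescale at the end recovers an approximate floating-point result.

```python
# Minimal sketch of reduced-precision inference math (illustrative only).
import numpy as np

def quantize_int8(x):
    """Map a float array onto int8 with a single per-tensor scale factor."""
    scale = float(np.max(np.abs(x))) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
weights = rng.standard_normal((4, 8)).astype(np.float32)   # a tiny "layer"
samples = rng.standard_normal(8).astype(np.float32)        # one input vector

q_w, s_w = quantize_int8(weights)
q_x, s_x = quantize_int8(samples)

# Integer multiply-accumulate (the work an inference engine optimizes),
# then one rescale back to floating point at the end.
acc = q_w.astype(np.int32) @ q_x.astype(np.int32)
approx = acc * (s_w * s_x)
exact = weights @ samples
print("worst-case error vs. float32:", float(np.max(np.abs(approx - exact))))
```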

In addition to the Versal ACAP, Xilinx also supports the new Advanced Microcontroller Bus Architecture Advanced eXtensible Interface 4 standard — better known as AMBA AXI4 — a freely available open standard for connecting and managing functional blocks in a system-on-chip, which provides a standard interface for FPGA intellectual property (IP) reuse. This can help reduce the risks and costs of developing multiprocessor designs with many controllers and peripherals.

AXI4 is the fourth generation of the AMBA interface specification for the Arm processor, which to a growing extent is being integrated onto FPGAs for sensor and signal processing. Xilinx is offering a range of AXI4-compliant IP that has one standard interface for general-purpose embedded computing, DSP, and logic domains.

Engineers at Pentek also are AXI4 proponents. “We are pushing more standardized FPGA libraries with the AXI4 standard, which defines a standard interface to IP modules for interoperability among multiple vendors of the IP code that goes on FPGAs,” says Pentek’s Hosking.

“With AXI4, everybody has agreed to play by the same standard, and it is working quite well,” Hosking says. “People are becoming more efficient in putting new unique applications on FPGAs. This is really important for our FPGA-development customers.”

This kind of design methodology is moving to a graphically oriented design practice. “A designer can put AXI4 IP blocks on his work surface and connect them with a mouse by clicking the output of one block to the input of another,” Hosking says. “That saves a tremendous amount of FPGA design time. Doing FPGA design is a rare talent and skill that is hard to find in the marketplace. Anything that can make that easier will help.”
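
To make the interoperability point concrete, here is a minimal behavioral sketch in plain Python, not HDL and not actual Xilinx IP; the StreamSource and StreamSink classes are invented for illustration. It models the valid/ready handshake at the heart of AXI4-Stream: a word moves only on cycles where the producer asserts TVALID and the consumer asserts TREADY, which is what lets IP blocks from different vendors be wired output-to-input without custom glue logic.

```python
# Toy behavioral model of an AXI4-Stream-style valid/ready handshake.
from collections import deque

class StreamSource:
    """Holds words to send; asserts TVALID while data remains."""
    def __init__(self, samples):
        self.samples = deque(samples)
    def tvalid(self):
        return bool(self.samples)
    def pop(self):
        return self.samples.popleft()

class StreamSink:
    """Accepts a word when it asserts TREADY; stalls every third cycle."""
    def __init__(self):
        self.received = []
    def tready(self, cycle):
        return cycle % 3 != 2          # model intermittent back-pressure
    def accept(self, word):
        self.received.append(word)

src, dst = StreamSource(range(8)), StreamSink()
cycle = 0
while src.tvalid():
    if src.tvalid() and dst.tready(cycle):   # transfer only when both sides agree
        dst.accept(src.pop())
    cycle += 1
print(f"delivered {dst.received} in {cycle} cycles")
```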

New levels of systems integration

Systems integration today for signal and sensor processing isn’t just about shrinking electronic components, but also seeks to add substantial capability to small electronic packaging. Designers at Mercury Systems in Andover, Mass., decided to take this concept a step further by ruggedizing and miniaturizing commercial data center technology for aerospace and defense embedded computing applications.

“We are bringing the entirety of the data center ecosystem into OpenVPX and embedded hardware that can be brought into deployed platforms,” says Shaun McQuaid, director of product management at Mercury. “Over the last 18 months we have taken a holistic view of what else is in the data center besides processing, to transpose those algorithms in the commercial world into high-performance edge computing.”

So how does Mercury stuff a data center into an embedded computing chassis? “I need processing, but also data storage,” McQuaid explains. “We launched a line of storage cards that leverages the M.2 standard.” M.2, formerly known as the Next Generation Form Factor (NGFF), describes internally mounted computer expansion cards and connectors. It replaces the mSATA standard, which used the PCI Express Mini Card physical layout and connectors, and is well-suited to solid-state storage applications, particularly in small devices like Ultrabook and tablet computers. M.2 solid-state data storage “is about the size of a gumstick,” McQuaid says. “It’s a lot of NVMe attached storage.”

In addition to M.2 data storage, Mercury engineers also designed an embedded PCI Express network switch, and a Switched Mezzanine Card (XMC) processing module on that switch. “Instead of having I/O come into a processor, we enable the I/O to come into the system and from there it can be distributed to the processors, FPGAs, and storage,” McQuaid says. “It’s all based on the latest generation of PCI Express interconnects to address the kinds of big-data problems that our industry has today.”

For complex and demanding military and aerospace applications like radar, EW, and SIGINT, “it’s critical to get these capabilities on those platforms,” McQuaid says. “You’ve gotta put that data center technology right on the platform so you can make good decisions based on that data.”

Thoughts that went into Mercury’s strategic decision to capitalize on data center technologies for high-performance embedded computing involve cost and capability. “Look at the commercial world and how better buying power should work,” McQuaid says. “What building blocks are critical?”

For Mercury, those building blocks taken from the data center and adapted to embedded computing consist of processors, I/O, general-purpose graphics processing unit (GPGPU) coprocessors, PCI Express switching, and FPGAs. “We want to make sure we have large amounts of memory, many cores on the CPU, and full functionality of those GPGPUs,” McQuaid says. “We want to leverage best-in-class solutions.”

In addition, Mercury designers have focused on widely applicable electronics cooling solutions, including liquid cooling. “It’s those kinds of investments from chip-scale at the memory level all the way up through the mechanical structure necessary to cool these components,” McQuaid explains. “It’s only in the past year that we’ve had all these pieces come together.”

Artificial intelligence

Yet another trend in sensor and signal processing is blending AI and machine learning into systems designs. “There’s a trend in the past couple of years where sensors are gaining more intelligence, and interfaces go between sensors and processing modules sitting behind that,” says Curtiss-Wright’s Smetana.

“We’re moving the decision piece closer to the processor front-end to accommodate algorithms that can automate some of the analysis,” he says. “Take SIGINT where you have to identify different signals out there, and sometimes those signals have noise. These algorithms make interpretations more intelligently. If it’s never been seen before, it can make some interpretation in an attempt to classify the signal.”
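
As a rough illustration of the kind of front-end decision Smetana describes, the sketch below uses plain NumPy with an invented feature set, class names, and rejection threshold, so it is not a fielded SIGINT algorithm: it classifies a signal snapshot against known signal classes by nearest centroid in a coarse spectral-feature space, and flags anything sufficiently unlike them as unknown for later analysis.

```python
# Illustrative open-set signal classification sketch (not an operational algorithm).
import numpy as np

def features(x):
    """Coarse spectral shape: normalized power in four frequency bands."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    bands = np.array_split(spectrum, 4)
    p = np.array([band.sum() for band in bands])
    return p / p.sum()

def classify(x, centroids, reject_distance=0.3):
    """Nearest known class, or 'unknown' if nothing is close enough."""
    f = features(x)
    dists = {name: np.linalg.norm(f - c) for name, c in centroids.items()}
    best = min(dists, key=dists.get)
    return best if dists[best] < reject_distance else "unknown"

rng = np.random.default_rng(2)
t = np.arange(2048)
known = {
    "narrowband": features(np.sin(2 * np.pi * 0.05 * t)),
    "wideband":   features(rng.standard_normal(2048)),
}
# A noisy tone matches a known class; a square wave at a new frequency does not.
print(classify(np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(2048), known))
print(classify(np.sign(np.sin(2 * np.pi * 0.21 * t)), known))
```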

Blending AI into signal and sensor processing still is in its infancy, but is gaining momentum quickly. “It’s still fairly new, and there’s a lot of hype around it,” Smetana says. “Still, there’s a need to understand how to make the best use of it — where it fits and doesn’t fit.”

SIGINT applications, in particular, are drowning in data. AI has the potential to make a quick analysis of incoming data streams and determine what data to keep and what to throw out. “Machine learning can be really good at figuring out what data is static, and what you want to look at,” says Curtiss-Wright’s Jedynak. “It can make our mission a lot more efficient.”

Take a typical airborne SIGINT mission, for example. “You have a two-hour mission that gathers 10 terabytes of encrypted data per hour,” Jedynak explains. “That 20 terabytes of data take up a lot of storage, but with machine learning we might be able to do some triaging and use four terabytes of data.”
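
A hedged sketch of that triage idea follows, using a simple statistical novelty test rather than any vendor's machine-learning pipeline; the triage function and its threshold are invented for illustration. Blocks of recorded samples whose statistics sit well outside a running baseline are kept, and the rest are discarded, which is how 20 terabytes of raw collection could shrink to a few terabytes of retained data.

```python
# Illustrative data-triage sketch (not any vendor's algorithm).
import numpy as np

def triage(blocks, threshold=3.0):
    """Keep blocks whose RMS power sits more than `threshold` sigma above baseline."""
    power = np.array([np.sqrt(np.mean(b ** 2)) for b in blocks])
    baseline, spread = np.median(power), np.std(power) + 1e-12
    keep = power > baseline + threshold * spread
    return [b for b, k in zip(blocks, keep) if k]

rng = np.random.default_rng(1)
# 1,000 blocks of noise; a handful contain a buried tone (the "interesting" data).
blocks = [rng.standard_normal(4096) for _ in range(1000)]
for i in (42, 317, 777):
    blocks[i] += 2.0 * np.sin(2 * np.pi * 0.1 * np.arange(4096))

kept = triage(blocks)
print(f"kept {len(kept)} of {len(blocks)} blocks "
      f"({100 * len(kept) / len(blocks):.1f}% of recorded data)")
```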

About the Author

John Keller | Editor-in-Chief

John Keller is the Editor-in-Chief of Military & Aerospace Electronics magazine, which provides extensive coverage and analysis of enabling electronic and optoelectronic technologies in military, space, and commercial aviation applications. He has been a member of the Military & Aerospace Electronics staff since 1989 and chief editor since 1995.
