By David Sheets
ASHBURN, Va. – At its core, trusted computing works to ensure that computing systems operate safely, securely, and correctly every time. Trusted computing matters at every level of operation, whether at the processor level, the software level, or the system level. Each layer of a computing system contributes to its secure operation. Because attackers can probe every layer of a system, securing only a single layer is rarely the most effective use of resources.
Attacks are becoming increasingly sophisticated; Rowhammer, Meltdown, and Spectre are just a few examples. System designers need to consider many attack vectors, and the security of hardware components can no longer be assumed: designers must verify their hardware and monitor it for future vulnerabilities. Secure hardware alone is not enough, however. For a system to be secure, its software also must be secure. That can mean hardening open-source operating systems such as Linux, or using software built from the ground up with security in mind, such as StarLab Crucible.
Even after the software is secured, the security architect's work is not done. Today's systems must integrate and interoperate to complete a mission, which means the network and physical interfaces connecting individually secure elements must also be analyzed for vulnerabilities and then locked down to mitigate possible attacks.
The good news is that many groups and documents are available to help the architect design, guide, and monitor a trusted computing system. Here are some of the most critical documents that system security architects need to understand.
At the hardware level, FIPS 140-2 (Federal Information Processing Standard 140-2) provides guidance on evaluated cryptographic hardware. Common Criteria, administered in the U.S. by the National Information Assurance Partnership (NIAP), provides confidence in the design and evaluation of secure systems. One recent example is the evaluated Curtiss-Wright DTS-1, the embedded industry's first commercial off-the-shelf (COTS) data-at-rest (DAR) network-attached storage (NAS) solution for secure data storage. For hardware security, the Trusted Computing Group (TCG) provides guidance on certification for Trusted Platform Modules (TPMs).
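To make the TPM's role concrete, consider measured boot: each boot component is hashed and folded into a platform configuration register (PCR), so later software can attest to exactly what was loaded. The Python sketch below illustrates that extend operation conceptually; it is a software illustration only, with made-up component names, and does not talk to real TPM hardware.

```python
# Conceptual illustration of the TPM PCR "extend" operation used in measured
# boot: the caller hashes each component, and the TPM folds that digest into
# the register as PCR_new = SHA-256(PCR_old || digest).
# Software sketch only -- no real TPM is involved.
import hashlib

PCR_SIZE = 32  # size of one PCR in the SHA-256 bank

def pcr_extend(pcr_value: bytes, component: bytes) -> bytes:
    """Fold a measurement of `component` into the running PCR value."""
    digest = hashlib.sha256(component).digest()
    return hashlib.sha256(pcr_value + digest).digest()

# Simulate measuring two (hypothetical) boot components into a zeroed PCR.
pcr = bytes(PCR_SIZE)
for component in (b"bootloader image", b"kernel image"):
    pcr = pcr_extend(pcr, component)

print("Simulated PCR value:", pcr.hex())
```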
Within the U.S. Department of Defense (DOD), the Anti-Tamper Executive Agent (ATEA) provides guidance on physical security for military systems. On the cyber security front, the Risk Management Framework (RMF), presented in a series of National Institute of Standards and Technology (NIST) and FIPS documents, provides a mechanism for evaluating system security across confidentiality, integrity, and availability, as well as guidance on how to meet required security levels.
Overlays also can be used with RMF to further refine the guidance based on a particular system's application, classification level, or other aspects of its operation. Much as DO-178B provides guidance on safety-critical software and DO-254 provides guidance on safety-critical hardware for aviation platforms, DO-326A provides similar guidance on cyber security for aviation. For programs that need more concrete, easily implementable guidance, the Security Technical Implementation Guides (STIGs) managed by the Defense Information Systems Agency (DISA) are an easy and helpful resource when an applicable STIG is available for the system being protected.
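Much of what a STIG prescribes boils down to verifiable configuration settings, which makes the checks straightforward to automate. The sketch below shows the general shape of such a check on a Linux target; the two settings and expected values are hypothetical examples chosen for illustration, not items pulled from any specific DISA STIG.

```python
# Sketch of automating STIG-style configuration checks on a Linux host.
# The sysctl keys and expected values below are illustrative examples only,
# not a substitute for the actual DISA STIG that applies to a given system.
from pathlib import Path

# Hypothetical checklist: sysctl key -> expected value.
EXPECTED_SYSCTLS = {
    "kernel.randomize_va_space": "2",  # full address-space layout randomization
    "net.ipv4.ip_forward": "0",        # forwarding disabled on a non-router host
}

def read_sysctl(key: str) -> str:
    """Read a sysctl value from /proc/sys (dots map to path components)."""
    return Path("/proc/sys", *key.split(".")).read_text().strip()

def run_checks() -> bool:
    all_passed = True
    for key, expected in EXPECTED_SYSCTLS.items():
        actual = read_sysctl(key)
        passed = actual == expected
        all_passed = all_passed and passed
        print(f"{'PASS' if passed else 'FAIL'}: {key} = {actual} (expected {expected})")
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if run_checks() else 1)
```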
Underpinning the integrity and confidentiality of trusted computing is the use of cryptographic algorithms. Cryptography is not a static discipline: because processing capabilities are always improving, designers need to understand their security requirements and how those requirements drive decisions about which cryptographic algorithms and key sizes to use. For example, many systems have requirements for how long information confidentiality must be maintained, and those requirements influence the selection of algorithms and key sizes.
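As a rough illustration of how such a requirement might feed a design decision, the snippet below maps a required confidentiality lifetime to a candidate AES key size. The threshold and choices are hypothetical stand-ins for the policy a program would actually derive from guidance such as NIST SP 800-57.

```python
# Illustrative only: turn a confidentiality-lifetime requirement into a
# candidate AES key size. The 10-year threshold is a hypothetical policy,
# not an authoritative recommendation.
def select_aes_key_size(protection_years: int) -> int:
    """Choose an AES key length for data that must stay confidential this long."""
    if protection_years <= 10:
        return 128  # shorter-lived data
    return 256      # long-lived data (also hedges against future advances)

for years in (5, 15, 30):
    print(f"{years}-year confidentiality requirement -> AES-{select_aes_key_size(years)}")
```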
Systems designers also need to understand symmetric cryptographic algorithms, such as AES, and where they are employed. In addition, security architects must understand the secure hashing algorithms, such as SHA-2 and SHA-3, used for image and data integrity verification, as well as the asymmetric algorithms, such as ECC and RSA, used to sign and verify images and in key-agreement schemes.
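The short sketch below ties those pieces together: it hashes a made-up firmware image with SHA-256 for integrity and verifies an ECDSA (P-256) signature over it for authenticity, using the third-party Python "cryptography" package. In a fielded system the private key would stay with the signing authority and the public key would be provisioned in protected storage; the inline key generation here only keeps the example self-contained.

```python
# Minimal sketch of image integrity and authenticity checking using SHA-256
# and ECDSA (P-256) via the third-party "cryptography" package.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

image = b"example firmware image contents"  # stand-in for a real image

# Integrity: a SHA-256 digest detects accidental or malicious modification.
print("SHA-256:", hashlib.sha256(image).hexdigest())

# Authenticity: an ECDSA signature over the image proves who produced it.
# (Key generation is inline only to keep the example self-contained.)
signing_key = ec.generate_private_key(ec.SECP256R1())
verifying_key = signing_key.public_key()
signature = signing_key.sign(image, ec.ECDSA(hashes.SHA256()))

try:
    verifying_key.verify(signature, image, ec.ECDSA(hashes.SHA256()))
    print("Signature valid: image accepted")
except InvalidSignature:
    print("Signature invalid: image rejected")
```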
Apart from existing algorithms and guidance, designers also must be aware of advances in quantum computing and how those advances might undermine the security of today's asymmetric cryptographic algorithms. Security architects should keep an eye on how newly developed algorithms, such as those in NIST's post-quantum cryptography standardization effort, might be integrated into their systems once accepted quantum-resistant implementations become available.
Going forward, it’s imperative to understand the trusted computing implications for every program. Trusted computing cannot be an afterthought. Instead, it must be built in from the start of every program to ensure that appropriate risks are understood and appropriate mitigations are put in place.
That does not mean that every program needs to implement the highest levels of security, but it does mean that every program should do the analysis to decide what level of security is needed, based on which risks can be tolerated and which are unacceptable.
Trusted computing is hard. Unlike many other engineering disciplines, it's not just about solving complicated problems; the added challenge comes from solving them while facing adversaries who are constantly advancing and evolving.
Even more difficult, unlike most enterprise systems, which can accept periodic updates and relatively inexpensive upgrades, deployed embedded systems must remain relatively static while staying resilient in the face of advancing attack capabilities.
Trusted computing can impact every facet of a computing system, including hardware, software, system integration, maintenance activities, and testability. By addressing security and trusted computing issues early in the program life cycle, program risks and costs can be managed; it's the programs that leave security to the end that run into real problems.
While implementing trusted computing is difficult, it is not an insurmountable problem. It just requires work, and starting with the appropriate expectations. By diligently working through potential issues and working closely with suppliers and vendors, programs can deliver secure solutions on time and on budget.
David Sheets is senior principal security architect at Curtiss-Wright Defense Solutions. Contact him by email at [email protected].