Secure operating systems and code verification tools are the front line when it comes to protecting mission-critical and classified information on today’s digital battlefield.
By Courtney E. Howard
Wars are fought on the digital battlefield, in the realm of computer and information networks, each day. Even during what might appear to be solely physical confrontations, information warfare (also commonly referred to as iWar and info war) plays a strong role. Each side of a conflict uses information warfare, offensively and defensively, to gain an edge. It is, therefore, a constant threat and a never-ending battle, regardless of whether the adversaries are cognizant of the war being waged on them and their computer networks. There are no front lines and no boundaries in an information war.
Information warfare can take many forms. From an offensive standpoint, iWar tactics range from withholding information and distributing disinformation to gaining or denying access to and taking control of computer systems. Defensive iWar efforts center on protecting one’s own data, computer systems, and networks.
Networks are the battlefields
For decades, the worry has been that adversaries will gain unauthorized access to classified, mission-critical information, such as troop locations, movement, and strategies. The very real threat now is that opponents will not only access this data, but gain complete control of the computer system, including its operating system (OS) and the various software applications sitting atop the OS.
After the security barrier of a single computer on a network is breached, it is easy to infiltrate other systems within that network. Commercially available, and oftentimes free, tools such as network sniffers and TCP/IP traffic analyzers enable unauthorized parties to obtain vital system information, such as user names and passwords, traffic in and out of the system, and specifics about the operating systems employed.
Taking that a step further, hackers could presumably then seize control of an entire network, such as the Global Information Grid (GIG). The fear is that they could control critical infrastructures: oil and gas pipelines, electric power grids, nuclear power stations, telecommunications and telephone networks, financial and funds-transfer systems, and radio and television signals.
National governments, defense organizations, terrorist cells, and even individuals acting alone are waging war in the electronic arena. Security experts reveal that hundreds of times daily, hackers attempt to invade critical infrastructure facilities in the United States alone.
Security standards
Security for military and national information systems has been the concern of officials in the Department of Defense (DOD) and the National Security Agency (NSA) ever since computers first started communicating with each other, explains a representative of Green Hills Software Inc., provider of secure, real-time operating systems (RTOSs) in Santa Barbara, Calif.
Together, DOD and NSA officials developed the Common Criteria for Information Technology Security Evaluation (abbreviated as Common Criteria) international standard for computer security certification. The Common Criteria security standard, known as ISO/IEC (International Organization for Standardization/International Electrotechnical Commission) 15408 and currently in version 3.1, defines security levels for information systems and networks.
LynuxWorks’ technology enables multiple operating systems–such as Microsoft Windows, Linux, and the company’s own LynxOS RTOS–to run on the same, secure computer system.
Following the completion of a Common Criteria security evaluation, an IT product or system is assigned an Evaluation Assurance Level (EAL), ranging from EAL 1 through EAL 7. Solutions are certified by the National Information Assurance Partnership (NIAP), a U.S. government initiative of the National Institute of Standards and Technology (NIST) and the NSA.
Measuring up
“Every traditional operating system–for example, Microsoft Windows, UNIX, Linux, and VxWorks–can be crashed or commandeered by a high-school hacker,” says a Green Hills Software representative. According to the NIAP Web site, the Microsoft Windows XP and Vista, Linux, and Sun Solaris operating systems have achieved EAL 4, and are regarded as offering a level of protection sufficient against inadvertent attempts to breach the system security.
An unnamed NSA security evaluator translates EAL 4 certification to mean “secure as long as nothing is connected to it.” Jonathan S. Shapiro, assistant research professor at The Johns Hopkins University Department of Computer Science, adds: “Security experts have been saying the security of the Windows family of products is hopelessly inadequate. Now there is a rigorous government certification confirming this.” Further, “EAL 1–4 certifications are essentially meaningless and have wasted immense amounts of money and time,” said David Kleidermacher, Green Hills Software’s chief technology officer, during his presentation (“Anatomy of an EAL 7 Security Certification”) at the Embedded Systems Conference (ESC) in San Jose last month.
To be fair, it should be mentioned that Kleidermacher’s company, Green Hills Software, offers the only operating systems to date to be certified to EAL 6+. Green Hills Software’s Integrity RTOS has been selected for a variety of military and aerospace applications that require EAL 6+ High Robustness certification, such as the avionics, weapons systems, engines, and radar of more than 20 military aircraft models. “Integrity has been evaluated by the NSA and certified by NIAP to EAL 6+ High Robustness for the protection of classified information against well-funded sophisticated attackers,” explains a representative.
Over the past three years, the Common Criteria methodology has come under serious criticism. The evaluation process has been criticized as discriminatory toward free and open-source organizations and tools, as very costly and slow, and as focused more on accompanying documentation than on the product itself.
“The Common Criteria framework for evaluation is essentially good,” Kleidermacher continues, noting that it has been applied badly; that is, a majority of time, effort, and money has been spent on conducting EAL 1–4 certifications.
“The process for evaluation, certification, and accreditation in security needs to be changed,” admits Steve Blackman, director of aerospace and defense business at LynuxWorks, maker of embedded operating systems in San Jose, Calif. “The process is too long and expensive to meet the needs of the security communities, and results in outdated systems being used long after their intended life and ad-hoc systems being applied.” In March of this year, NIAP officials agreed with many of the criticisms.
“Based on the results of evaluations against the Basic and Medium Robustness Protection Profiles and comments from vendors and our customers, NIAP has determined that the current U.S. Protection Profile Robustness model needs to be revised,” reads a NIAP statement issued in March. “NSA is creating a Standard Protection Profile, which will replace any corresponding U.S. Government Protection Profile. We will work with industry, our customers, and the Common Criteria community to create these Protection Profiles.
“The first generation of these Protection Profiles will take into account the current assurance that is achievable for a technology and the Evaluated Assurance Level will be set based on the availability of the documentation, test plans, and tools needed to obtain consistent and comparable results. Future increases in the Evaluated Assurance Level of each Protection Profile will require more refinement of the assurance criteria, more detailed test plans, and greater disclosure of evaluator evidence, testing performed, and vulnerabilities found. NIAP will work with the Common Criteria community to ensure that Common Criteria 4.0 supports these requirements.”
Risk and the RTOS
Security starts with the operating system. “If someone gains system-level privileges at the OS, they can do anything they want in a system and violate all security protections,” explains Blackman. “We see this in movies when someone hacks a password and claims, ‘I’m in.’ That statement refers to access at the OS level in a system.”
Not all operating systems are created equal. Indeed, most of those employed today were not developed with security in mind; rather, they were developed for the average, benign, and untrained user, and system security was virtually non-existent at the computer level. In decades past, a “secure” computer was one to which physical access was restricted to authorized personnel. Later, an entire anti-virus and firewall industry arose, which “barely does the job of protecting our home PCs,” Blackman continues. Those methods have proven inadequate against today’s threats.
“You need to start at the foundation and provide security in an OS, from its design inception, in order to provide a secure approach to cyber security,” Blackman explains. “With domain isolation and information flow control as the key elements in an OS, you can build security into a system capable of being used and evaluated to high assurance. That model was, and is, called the separation kernel. Using a separation kernel approach, you can build MILS [Multiple Independent Levels of Security] architecture-based systems that contain the elements needed to secure systems.”
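To illustrate the separation-kernel concept, the sketch below shows a static information-flow policy of the sort a MILS-based system enforces. The partition names, table layout, and check function are hypothetical illustrations, not LynxSecure’s or any other vendor’s actual configuration format.

/* Illustrative sketch of MILS-style information flow control. The
 * partition IDs, policy table, and check function are hypothetical;
 * real separation kernels define their own configuration formats and
 * enforce the policy inside the kernel. */
#include <stdbool.h>
#include <stdio.h>

enum partition { PART_UNCLASS, PART_SECRET, PART_CRYPTO, PART_COUNT };

/* Static policy, fixed when the system image is built:
 * allowed[src][dst] == true means data may flow from src to dst. */
static const bool allowed[PART_COUNT][PART_COUNT] = {
    /*            UNCLASS SECRET CRYPTO */
    /* UNCLASS */ { true,  true,  false },
    /* SECRET  */ { false, true,  true  },   /* no write-down to UNCLASS */
    /* CRYPTO  */ { false, true,  true  },
};

/* Every inter-partition transfer is mediated by this check. */
static bool flow_permitted(enum partition src, enum partition dst)
{
    return allowed[src][dst];
}

int main(void)
{
    printf("SECRET  -> UNCLASS: %s\n",
           flow_permitted(PART_SECRET, PART_UNCLASS) ? "allowed" : "denied");
    printf("UNCLASS -> SECRET : %s\n",
           flow_permitted(PART_UNCLASS, PART_SECRET) ? "allowed" : "denied");
    return 0;
}

Because the table is fixed when the system image is built, the policy cannot be altered at run time, which is the property Blackman describes below for security policies defined when building the system.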
It is not uncommon for organizations to employ role-based authorization, encryption technology, and physical log-on keys, codes, and chips to validate users before providing restricted access to protected systems. “Unfortunately,” explains Blackman, “restricted access can be converted into system-level access if the OS allows it via hacking. This is what happens in most Windows-, Linux-, and UNIX-based operating systems. The separation kernel approach restricts access to system-level privileges and is far superior.”
LynuxWorks’ LynxSecure OS was designed to deliver security at the OS level. “The information flow control is built into the kernel,” Blackman describes, “and unauthorized accesses are denied based on the security policies that are defined when building the system.”
The LynxSecure separation kernel is being incorporated in the U.S. Navy’s Common Display System (CDS), an $83 million project in support of DDG 1000 and Aegis modernization efforts and part of the Navy’s Open Architecture Computing Environment initiative.
CDS–intended for DDG 1000 Zumwalt-class, next-generation destroyers as well as modernized Aegis-class guided missile destroyers–is a survivable, configurable, high-assurance workstation that provides an operator access to several shipboard applications simultaneously. The CDS display console system, like other military projects, requires adherence to strict, high-assurance security requirements. The LynxSecure separation kernel and hypervisor enable several guest operating systems running at different security levels–such as classified and unclassified–to execute simultaneously on a single system.
“Security lies at the heart of the future of embedded systems,” Gurjot Singh, chief executive officer of LynuxWorks, says, noting that LynxSecure will enhance the security of the Navy’s CDS display system.
Code conscious
Software plays key roles in all forms of information warfare and can provide a competitive advantage in a warfighting scenario. In pursuit of this objective, says Kelvin Nilsen, chief technology officer of Aonix North America Inc. in San Diego, software enables the interception and deciphering of enemy communications; the gathering of sonar, radar, lidar, and infrared imagery from unmanned aircraft; triangulation to locate the sources of radio broadcasts and gunshots; the assimilation of information gathered from several sources; and human understanding of that information by providing situational awareness.
“Software also plays critical roles in assuring reliable information delivery to intended recipients, while protecting information from being intercepted by unintended recipients and preventing enemy disinformation activities from corrupting the content of secure communications,” Nilsen explains. Offensive information warfare may include activities to disable or corrupt an enemy’s information gathering and distribution, as well as the delivery of certain types of information or disinformation to the enemy’s soldiers or citizens, he says.
Software applications are composed of source code: collections of statements written in a computer programming language. Source code, too, plays a role in information warfare.
“Software code can be strengthened to protect and secure information, and be exploited as a source of security vulnerabilities,” says John S. Greenland, Jr., vice president of business development at LDRA Technology Inc. in San Bruno, Calif. “It depends upon the application, but code can be used to separate various classes of information, such as classified and unclassified; to encrypt sensitive data across general information channels, such as the Internet; and as a subversive agent to hack into systems.”
The “holy grail” of secure software code would be a locked-down code base proven to have been worked on only by nationals of a certain country, or only by people cleared for the appropriate levels of classified information, Greenland explains. The prevalence and rampant use of open-source code, “shareware,” “freeware,” and Linux, an open-source, UNIX-like operating system, make this ideal a rare occurrence. All is not lost, however; tools exist that help enable high levels of security by enforcing and automating proper software processes and best practices, and by examining code sequences for vulnerabilities. The LDRA test bed is a static-analysis tool that uncovers security vulnerabilities in C code using the CERT C secure coding standard.
The Computer Emergency Response Team (CERT) program is part of the Software Engineering Institute (SEI), a federally funded research-and-development center at Carnegie Mellon University in Pittsburgh. In 1988, the Defense Advanced Research Projects Agency (DARPA) charged the SEI to set up a center–which was named the CERT Coordination Center (CERT/CC)–to coordinate communication among experts during security emergencies and to help prevent future incidents.
“Along with the rapid increase in the size of the Internet and its use for critical functions, there have been progressive changes in intruder techniques, increased amounts of damage, increased difficulty of detecting an attack, and increased difficulty of catching the attackers,” a CERT official reveals.
The CERT secure coding standard is brand new; version 1.0 was finished just months ago, and organizations are starting to adopt it, Greenland says. The nuclear energy sector is particularly interested in the standard and in code-verification tools to help address cyber-security issues at nuclear reactors. Officials in that sector “are directly taking LDRA’s secure coding product and applying it to legacy source code to determine what they are dealing with in terms of security vulnerabilities.” The LDRA test bed is also being used to perform vulnerability analysis on source code in the military and aerospace sector, he continues. Given the amount of legacy code used in military and aerospace platforms and continued efforts to update electronics systems, he expects industry adoption of code-verification tools to grow.
These legacy systems, some of which have been around for 30 years, employ millions of lines of code, Greenland acknowledges. “Security is atop the list of concerns out there, but there’s a lot of other legacy work to do and security is just one leg of it, so they need to analyze this stuff quickly.”
The LDRA product scans through the C code, finds suspicious patterns or sequences, and flags them such that a software engineer can remove or modify the code sequence. “It is an automated way to look at large bodies of code and make them more secure,” describes Greenland.
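As a generic illustration of the kind of pattern such a scan flags (the fragment below and its fix are illustrative examples, not actual LDRA output), consider an unbounded string copy, which violates CERT C rule STR31-C requiring that string storage be large enough for the character data and the null terminator.

/* Generic illustration of a CERT C-style finding and repair; not LDRA
 * output. Rule STR31-C requires that storage for strings has
 * sufficient space for character data and the null terminator. */
#include <stdio.h>
#include <string.h>

#define NAME_LEN 16

/* Flagged pattern: strcpy() performs no bounds check, so a user name
 * longer than NAME_LEN - 1 characters overflows the buffer. */
void store_user_unsafe(char dest[NAME_LEN], const char *user)
{
    strcpy(dest, user);                 /* potential buffer overflow */
}

/* Repaired pattern: copy at most NAME_LEN - 1 characters and
 * guarantee null termination. */
void store_user_safe(char dest[NAME_LEN], const char *user)
{
    strncpy(dest, user, NAME_LEN - 1);
    dest[NAME_LEN - 1] = '\0';
}

int main(void)
{
    char name[NAME_LEN];
    store_user_safe(name, "operator_console_one");   /* safely truncated */
    printf("%s\n", name);
    return 0;
}

A static-analysis tool automates finding the first form across millions of lines of code; the second form is one common remedy a software engineer might substitute.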
At the language level
Software code is written in a computer programming language, the most well known of which include Java, C++, and Ada. Most legacy military and aerospace systems are written in Ada, although up-and-coming software developers are trained in Java. “As a programming language, Ada is much more secure than C and C++, and it is no less secure than modern programming languages like Java,” Nilsen admits. “However, there is an issue that most legacy software targeted a different operating environment than is common today.” Legacy warfighting systems were far less interconnected; typical legacy warfighting equipment was not connected to a global information grid. As a result, the software developed to run on that equipment was not subjected to the security-assurance scrutiny that has been introduced in recent years, and there is a need to modernize many existing software systems.
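A minimal sketch of that language-level difference, for illustration only: C performs no automatic run-time index checking, so bounds guards must be written by hand, whereas Ada raises Constraint_Error and Java throws ArrayIndexOutOfBoundsException for the same out-of-range access.

/* Illustrative only: the bounds guard below must be written by hand in
 * C; Ada and Java perform the equivalent check automatically at run
 * time (Constraint_Error / ArrayIndexOutOfBoundsException). */
#include <stdbool.h>
#include <stdio.h>

#define TRACK_COUNT 4

static const int track_ids[TRACK_COUNT] = { 101, 102, 103, 104 };

/* Without this guard, an out-of-range index would silently read or
 * corrupt adjacent memory rather than fail visibly. */
static bool read_track(int index, int *out)
{
    if (index < 0 || index >= TRACK_COUNT) {
        return false;                  /* reject the access */
    }
    *out = track_ids[index];
    return true;
}

int main(void)
{
    int value;
    if (read_track(2, &value)) {
        printf("index 2 -> %d\n", value);
    }
    if (!read_track(7, &value)) {
        printf("index 7 rejected\n");
    }
    return 0;
}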
“Many projects pursue additional objectives, such as improved functionality, maintainability, portability, and scalability while restructuring software to enable security audits,” Nilsen continues. “In the process, converting all or part of a legacy system to Java is attractive because the Java language is more portable than Ada, it provides a larger assortment of reusable off-the-shelf software components and developer tools, and it is far easier to recruit competent developers familiar with the language.”
Aonix provides technology designed to help members of the defense sector to modernize legacy software systems, enabling use of Java in mission-critical and safety-critical systems. During ESC last month, Aonix officials announced support for executing real-time Java on secure partitioned kernels, including VxWorks MILS 2.0 from Wind River in Alameda, Calif., and the PikeOS MILS-compliant operating system from SYSGO AG in Germany. “The use of MILS-compliant operating systems makes it possible to combine software that is certified to different levels of security on the same processor, running each software component in a different partition of the secure operating system,” Nilsen notes. “In this sort of deployment, the operating system assures that information does not leak from one security level to a different one.”
The PERC Ultra virtual machine from Aonix is employed in Taranis, a $166 million, next-generation unmanned aerial vehicle (UAV) for the United Kingdom’s Ministry of Defence (MOD). Officials from BAE Systems, the industry lead and prime contractor for the program, and QinetiQ, provider of the Reasoning Layer of the Autonomy Mission System, both of London, selected the Aonix solution for the project.
The UAV’s Autonomy Mission System controls the flight path and sensor usage to achieve a mission. The Taranis Reasoning Layer within the system runs complex decision-making and optimization algorithms on an embedded processor. QinetiQ chose PERC because it enables the use of existing Java code and libraries in an embedded environment, and provides support for soft real-time operation, says a representative.
“Taranis is the latest win for PERC Ultra in the growing autonomous vehicle market, and it’s particularly exciting to be working with QinetiQ because the mission requirements of Taranis represent proof of the power and productivity that Java technology can bring to complex, high-intelligence embedded and real-time systems,” says Adrian Larkham, Aonix U.K. general manager.
“We chose to develop the Taranis Reasoning Layer with Java due to the broad range of capabilities of that platform, but we needed PERC Ultra to support practical deployment in a real-time, embedded environment,” says Peter Baynham, managing director of QinetiQ’s Command & Intelligence Systems business. “Support for our chosen architecture and its ability to integrate with existing libraries were also key factors.”
Given that Taranis is focused on targeting and attack, rather than on the surveillance and reconnaissance missions common to currently fielded unmanned systems, security is of paramount importance as the aircraft makes its way onto today’s digital battlefield.
Global networking
The Global Information Grid (GIG) is defined by officials in the United States Department of Defense as a “globally interconnected, end-to-end set of information capabilities for collecting, processing, storing, disseminating, and managing information on demand to warfighters, policy makers, and support personnel.” The GIG encompasses owned and leased communications and computing systems, software code and programs, data, security systems, and other associated applications and services.