Credit: Alessia Pierdomenico/

Ron Ross, NIST Fellow, NIST Information Technology Laboratory

After four years of research and development, NIST has published a groundbreaking new security guideline that addresses the longstanding problem of how to engineer trustworthy, secure systems — systems that can provide continuity of capabilities, functions, services, and operations during a wide range of disruptions, threats, and other hazards. In fact, I think that Special Publication 800-160, Systems Security Engineering, is the most important publication that I have been associated with in my two decades of service with NIST.

I want to share what led me to this conclusion.

The Current Landscape

The United States, and every other industrialized nation, is experiencing explosive growth in information technology. These technological innovations have given us access to computing and communications capabilities unparalleled in the history of mankind.

These rapid advancements, and the dramatic growth in consumer demand for them, are occurring alongside a revolutionary convergence of cyber and physical systems, or cyber-physical systems (CPS). The worldwide distribution of these technologies has resulted in a highly complex information technology infrastructure of systems and networks that are difficult to understand and even more difficult to protect.

Today, we are spending more on cybersecurity than ever before. At the same time, we are witnessing an increasing number of successful cyberattacks by nation states, terrorists, hacktivists, and other bad actors who are stealing our intellectual property, national secrets, and private information. Unless we make some kind of radical change to the way we think about and fight these attacks, they are going to have an increasingly debilitating — and potentially disastrous — effect on the economic and national security interests of the United States.

The Basic Problem Is Simple

Our fundamental cybersecurity problem can be summed up in three words: too much complexity. There are simply too many bases to cover — all the software, firmware, and hardware components that we rely on to run our critical infrastructure, business, and industrial systems — and we are adding new ones all the time.

Increased complexity translates to increased attack surface — providing adversaries a limitless opportunity to exploit vulnerabilities resulting from inherent weaknesses and deficiencies in the components of the underlying systems that we have built and deployed. We can characterize this predicament as the N+1 vulnerabilities problem.

According to a 2013 Defense Science Board study conducted for the U.S. military, vulnerabilities fall into three categories: those that are known; those that are unknown; and those created by adversaries after they have taken control of your system. Given this reality, there are vulnerabilities that we can find and fix, and a growing number that we cannot detect and that therefore remain unmitigated.

While we are making significant improvements in our reactive security measures, including intrusion detection and response capabilities, those measures fail to address the fundamental weaknesses in system architecture and design. These weaknesses can only be addressed with a holistic approach based on sound systems security engineering techniques and security design principles. This holistic approach will make our systems more penetration-resistant; capable of limiting the damage from disruptions, hazards, and threats; and sufficiently resilient so they can continue to support critical missions and business functions after they are compromised.

Engineering-Based Solutions

We have a high degree of confidence our bridges and airplanes are safe and structurally sound. We trust those technologies because we know that they were designed and built by applying the basic laws of physics, principles of mathematics, and concepts of engineering. If bridges were routinely collapsing and airplanes were frequently crashing, the first people we would call would be the scientists and engineers. They would do root-cause failure analysis, find out what went wrong, and fix the problem.

Cybersecurity efforts today are largely focused on what is commonly referred to as “cyber hygiene.” Cyber hygiene includes such activities as inventorying hardware and software assets; configuring firewalls and other commercial products; scanning for vulnerabilities; patching systems; and monitoring.

While practicing good cyber hygiene is certainly necessary, it’s not enough. This is because these activities don’t affect the basic architecture and design of the system. Even if we were to achieve perfection in this visible, above-the-waterline work, we would still be leaving our most critical systems highly vulnerable because of our inability to manage and reduce the complexity of the underlying technology.

The only way to address the N+1 vulnerabilities problem is to incorporate well-defined engineering-based security design principles at every level, from the physical to the virtual. These principles should be driven by mission and business objectives, stakeholder protection needs, and security requirements of the individual organization. While those solutions may not be appropriate in every situation, they should be available to those entities that are critical to the economic and national security interests of the United States including, for example, the electric grid, manufacturing facilities, financial institutions, transportation vehicles, medical devices, water treatment plants, and military systems.

A National Strategy Focused on Trustworthy Systems

Today, the cybersecurity threats facing our government, businesses, critical infrastructure, industrial base, and people are as severe as the threat of terrorism or the dangers we faced during the Cold War.

Overcoming these threats will require a significant investment of resources and the involvement of government, industry, and the academic community. It will take a concerted effort on a level we haven’t seen since President Kennedy dared us to do the impossible and put a man on the moon over a half century ago.

We can do it again, but the clock is ticking. Creating more trustworthy, secure systems requires a holistic view of the problems; the application of the concepts, principles, and best practices of science and engineering to solve them; and the leadership and will to do the right thing, even when such actions may not be popular.

I think that NIST Special Publication 800-160 is the first step we need to take toward securing the things that matter to us. It will be a grand challenge, but we Americans have a long history of achieving the impossible.

This post originally appeared on Taking Measure, the official blog of the National Institute of Standards and Technology (NIST) on November 15, 2016.


About the Author


Ron Ross is a computer scientist and Fellow at the National Institute of Standards and Technology. He specializes in cybersecurity, risk management, and systems security engineering. Ron is a retired Army officer who, when not defending cyberspace, follows his passion for NASCAR and takes care of his adopted rescue dog, Sophie.

