Where Does Zero Trust Begin and Why is it Important?

By: Kathleen M. Moriarty, CIS Chief Technology Officer


Zero trust is an important architectural shift in information security. It moves us away from the perimeter defense-in-depth models of the past to layers of control closer to what is valued most: the data. When initially defined by a Forrester analyst, zero trust focused on the network providing application isolation to prevent attacker lateral movement. It has since evolved to become granular and pervasive, providing authentication and assurance between components, including microservices.

As the benefits of zero trust become increasingly clear, the model's pervasiveness is evident; it relies upon a trusted computing base and data-centric controls as defined in NIST Special Publication 800-207. So, as zero trust becomes more pervasive, what does that mean? How do IT and cybersecurity professionals manage the deployment and maintain assurance of its effectiveness?

Zero Trust Architecture: Never Trust, Always Verify

Zero trust architectures reinforce the point that no layer of the stack trusts the underlying components, whether hardware or software. Security properties are therefore verified for every dependency and interdependency, on first use and intermittently thereafter (the dynamic authentication and verification tenets of zero trust). Each component is built as if its adjoining or dependent components may be vulnerable: each assumes it alone must assure the trust level it asserts, and each must be able to detect a compromise or even an attempted compromise.

This can be a confusing paradigm, in that zero trust instills the principle of isolation at every layer. Isolation enforces the so-called zero trust between components, while verification of security properties and identity is performed continually to provide assurance that expected controls are met. A component may decline to execute if the expected properties of its dependencies are not assured. Zero trust architectures assert to “never trust, always verify.” This enables detection and prevention of lateral movement and privilege escalation for each component, and results in higher assurance for the system and software.
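As a minimal sketch of this behavior (all names below are illustrative, not from any particular framework), the fragment shows a component that re-verifies a dependency's claimed security properties on every use and declines to run when they are not assured:

```python
# Minimal sketch: a component that never trusts a dependency implicitly.
# All names (Dependency, EXPECTED_PROPERTIES, etc.) are illustrative.
from dataclasses import dataclass, field

@dataclass
class Dependency:
    name: str
    properties: dict = field(default_factory=dict)  # e.g., {"authn": "mTLS"}

# The security properties this component requires of every dependency.
EXPECTED_PROPERTIES = {"encryption": "TLS1.3", "authn": "mTLS"}

def verified(dep: Dependency) -> bool:
    """Check properties on every call; trust is never cached (always verify)."""
    return all(dep.properties.get(k) == v for k, v in EXPECTED_PROPERTIES.items())

def call_dependency(dep: Dependency) -> None:
    if not verified(dep):
        # The component declines to execute rather than proceed on trust.
        raise PermissionError(f"{dep.name}: expected security properties not assured")
    # ... proceed with the request only after verification ...
```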

Core Tenets

Identity, authentication, authorization, access control, and encryption are among the core tenets of any zero trust architecture, where deliberate and dynamic decisions are continuously made to verify assurance between components. While zero trust is often discussed at the network layer as a result of its origin at Forrester, the definition has evolved considerably over the last decade into a pervasive concept that spans infrastructure, device firmware, software, and data.

Zero trust is often discussed as it relates to the network, with isolation of applications by network segment and assurance that controls such as strong encryption and dynamic authentication are met. Zero trust can also be applied at the microservices level, providing assurance of controls and measurements via verification between services. This granular application of the model further strengthens prevention and detection of attacker lateral movement.
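As one illustration of verification between services, the sketch below enforces mutual TLS from one microservice to another using Python's standard ssl module; the certificate file names and the peer service address are assumptions. Mutual TLS is one common way to satisfy the strong-encryption and dynamic-authentication controls named above, since each side proves its identity on every connection:

```python
# Hedged sketch: mutual TLS between microservices with the standard library.
# Certificate paths and the peer service address are assumptions.
import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1_3        # enforce strong encryption
ctx.load_verify_locations("internal-ca.pem")        # trust anchor for peer identity
ctx.load_cert_chain("svc-a.pem", "svc-a.key")       # this service's own identity

with socket.create_connection(("svc-b.internal", 8443)) as raw:
    # check_hostname is on by default for PROTOCOL_TLS_CLIENT: the peer must
    # present a certificate matching its claimed identity on every connection.
    with ctx.wrap_socket(raw, server_hostname="svc-b.internal") as tls:
        tls.sendall(b"GET /health HTTP/1.1\r\nHost: svc-b.internal\r\n\r\n")
```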

Infrastructure Assurance

Zero trust begins with infrastructure assurance and has become pervasive up the stack and across applications. A hardware root of trust (RoT) is immutable, with a cryptographic identity bound to the Trusted Platform Module (TPM). The infrastructure assurance example instills the tenets of a zero trust architecture: upon boot, the system first verifies that the hardware components are as expected.

Next, the system boot process verifies the system and each dependency against a set of so-called “golden policies,” which include expected measurements attested to with a digital signature using the cryptographic identity in the TPM. If one of the policy comparisons does not match, the process may be restarted, or the system boot may be halted. While there are several hardware- and software-based RoT options, the resiliency guidelines for firmware and BIOS (NIST SP 800-193) are generally followed in developing the policies and measurements used from boot.
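A simplified sketch of such a golden-policy comparison follows; the component names and (truncated) digests are hypothetical placeholders, and a real implementation would use TPM-backed measurements rather than plain file hashes:

```python
# Illustrative golden-policy check: each boot stage is measured and compared
# to an expected value; a mismatch restarts or halts the boot. The component
# names and truncated digests here are hypothetical placeholders.
import hashlib

GOLDEN_POLICY = {
    "firmware":   "9f2c...",   # expected SHA-256 of the firmware image
    "bootloader": "a41b...",   # expected SHA-256 of the bootloader
    "kernel":     "77de...",   # expected SHA-256 of the kernel
}

def measure(image: bytes) -> str:
    return hashlib.sha256(image).hexdigest()

def verify_stage(name: str, image: bytes) -> None:
    if measure(image) != GOLDEN_POLICY[name]:
        # Policy mismatch: halt (or restart) rather than continue the boot.
        raise SystemExit(f"boot halted: measurement mismatch for {name}")
```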

Attestations are signed by a RoT at each stage of the boot process and are used both to identify the relying components and to provide an assurance of trust, thus establishing at the most basic level that the system and its components are as required. The dependencies may be chained or verified individually. These attestations are also provided at runtime, supporting the zero trust requirement for dynamic authentication and access control, in this case for infrastructure components. Attestations aid in verifying the identity of components, which is essential for providing assurance of those components.
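The sketch below shows the verification side of such an attestation under stated assumptions: it uses the 'cryptography' package, with an Ed25519 key standing in for the RoT's cryptographic identity; real attestation quotes follow TCG-specified formats not reproduced here:

```python
# Hedged sketch: verifying a signed attestation. An Ed25519 key stands in
# for the RoT's identity; real attestation quotes follow TCG formats.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

def attestation_valid(rot_pubkey: bytes, measurement: bytes, signature: bytes) -> bool:
    """True only if the measurement was signed by the root of trust's key."""
    try:
        Ed25519PublicKey.from_public_bytes(rot_pubkey).verify(signature, measurement)
        return True
    except InvalidSignature:
        return False
```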

Any attacker who has infiltrated a component or its software would need to survive this dynamic and periodic verification and authentication to remain a threat. The attacker would also have to figure out how to escalate privileges or move laterally between isolated components that do not trust each other.

Trusted Control Sets

The Trusted Computing Group’s (TCG) Reference Integrity Manifest, based on NIST’s firmware resiliency Special Publication, provides the trusted controls for policy and measurement of firmware. Further up the stack, trusted control sets that provide the verification necessary for zero trust include the CIS Controls and the CIS Benchmarks. Trusted third parties such as NIST, CIS, and TCG provide a necessary external and established vetting process to set control and benchmark requirements. An example would be attestations used to demonstrate compliance with a CIS operating system or container Benchmark at a specified level of assurance.
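As a hedged sketch of that example, the fragment below packages a benchmark-compliance result as a signed claim. The claim fields and benchmark identifier are illustrative assumptions; in practice the assessment itself would come from a tool such as CIS-CAT, and the signing key would be protected by a RoT:

```python
# Illustrative sketch: a signed attestation of CIS Benchmark compliance.
# Claim fields are assumptions; the signing key would live in a RoT/TPM.
import json
import time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()      # stand-in for a protected RoT key

claim = {
    "benchmark": "CIS Ubuntu Linux 22.04 LTS",  # illustrative Benchmark name
    "profile": "Level 1",                       # the asserted assurance level
    "passed": True,                             # result of the (elided) assessment
    "timestamp": int(time.time()),
}
payload = json.dumps(claim, sort_keys=True).encode()
attestation = {"claim": claim, "signature": signing_key.sign(payload).hex()}
```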

What Evidence Supports this Shift to Zero Trust?

Interestingly, at about the same time that zero trust architectures began to take shape, Lockheed Martin developed its Cyber Kill Chain (in 2011). The Cyber Kill Chain was first defined to separate the stages of an attack, enabling mitigation and detection defenses between stages. The MITRE ATT&CK framework is used more predominantly today, building on the foundation of Lockheed Martin's model plus gaps identified through use and the evolving threat landscape. For the purposes of this article, the Cyber Kill Chain will be used to simplify the correlation, but the discussion can be abstracted to the MITRE ATT&CK framework.

The Lockheed Martin Kill Chain was developed in response to the ever-increasing sophistication of advanced persistent threat (APT) attacks, which had shifted to include supply chain attacks. By implementing defenses and controls between attack phases, including requirements to prove identity dynamically via authentication, attackers' lateral movement or privilege escalation attempts can be detected more easily. Moving detection and prevention earlier in the kill chain is ideal to prevent attacks from succeeding (e.g., exfiltration of data or disruption within the network).

Applying detection and prevention techniques pervasively in the stack, and across applications and functions, with dynamic access controls to verify the authentication of attested components supports zero trust architectural tenets and enables detection early in the kill chain. The evidence that the tenets of zero trust work is clear when you consider its deployment in concert with kill chain detection controls, as reflected in attacker dwell time trends.

Reducing Dwell Time

Since the kill chain was first put to use, attacker dwell time (the time an attacker remains on a network undetected) has been dramatically reduced. This can be seen clearly in both the global and regional dwell time changes as different regions adopted the Cyber Kill Chain and zero trust defenses. According to FireEye's annual M-Trends reports, the global median dwell time was 229 days in 2013; the 2020 report puts it at 56 days, a reduction of roughly 75 percent. The regional numbers also support the success of this architectural approach, given the known disparity in adoption of the zero trust architectural pattern and of the Kill Chain and MITRE ATT&CK defense frameworks.

The United States was known to be an early adopter of both. Taking 2017 as an example, the median dwell time in the Americas was 75 days, versus 172 days in Asia. Smaller or less-resourced organizations in any region, at any point in time, may experience wildly different dwell times from larger, well-resourced organizations. Even so, the dwell time numbers help demonstrate the success of these controls with tangible data.

Zero trust evolved from a network-only definition, where applications were segregated, to a more granular level in support of detecting unexpected behaviors between all components. The logical connection between zero trust and the Lockheed Kill Chain demonstrates the clear value of both models. It also helps to project the future of zero trust as increasingly data-centric, built upon a foundation of components isolated from boot in the infrastructure and attesting to their verified identity and assurance levels up and across the stack to the microservices level.

NIST SP 800-207 defines zero trust as follows:
“Zero trust (ZT) provides a collection of concepts and ideas designed to minimize uncertainty in enforcing accurate, least privilege per-request access decisions in information systems and services in the face of a network viewed as compromised. Zero trust architecture (ZTA) is an enterprise’s cybersecurity plan that utilizes zero trust concepts and encompasses component relationships, workflow planning, and access policies. Therefore, a zero trust enterprise is the network infrastructure (physical and virtual) and operational policies that are in place for an enterprise as a product of a zero trust architecture plan.”

Tenets of Zero Trust

The following list of tenets is sourced from the NIST CSRC publication SP 800-207; a sketch showing how several of these tenets combine into a per-request access decision follows the list.

  1. All data sources and computing services are considered resources
  2. All communication is secured regardless of location
  3. Access to individual enterprise resources is granted on a per-session basis
  4. Access to resources is determined by dynamic policy
  5. All owned and associated devices are in the most secure state possible
  6. All resource authentication and authorization are dynamic and strictly enforced
  7. Collect as much information as possible on current state of network infrastructure to improve security posture
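
As a minimal sketch (the attribute names and risk threshold are assumptions for illustration), several of these tenets can be read together as a per-request decision function: access is granted per session (tenet 3), by dynamic policy (4), conditioned on device state (5), and re-evaluated on every request (6):

```python
# Minimal sketch of tenets 3-6 as a per-request policy decision.
# Attribute names and the risk threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: str
    resource: str
    authenticated: bool
    device_patched: bool   # tenet 5: device security state
    risk_score: float      # fed by collected telemetry (tenet 7)

def grant(req: AccessRequest) -> bool:
    """Evaluated fresh for every request; no decision is cached (tenet 6)."""
    if not req.authenticated:
        return False              # identity must be proven dynamically
    if not req.device_patched:
        return False              # insecure devices are denied (tenet 5)
    return req.risk_score < 0.5   # dynamic, context-driven policy (tenet 4)
```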

An objective of the Lockheed Kill Chain is to proactively detect threats. The tenets of zero trust aid in prevention and detection along the phases of the kill chain.

Lockheed’s Kill Chain

  1. Reconnaissance: Harvesting email addresses, conference information, network data
  2. Weaponization: Coupling exploit with backdoor into deliverable payload
  3. Delivery: Delivering weaponized bundle to the victim via email, web, USB, etc.
  4. Exploitation: Exploiting a vulnerability to execute code on victim’s system
  5. Installation: Installing malware on the asset
  6. Command & Control (C2): Command channel for remote manipulation of victim
  7. Actions on Objectives: With hands on keyboard access, intruders accomplish their original goals

Lockheed Kill Chain mapped to NIST Zero Trust Tenets
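
The heading above refers to a mapping between the two lists. As one hedged illustration (these pairings are assumptions, not the original mapping), the dictionary below keys each kill chain phase to the NIST SP 800-207 tenet numbers listed earlier:

```python
# One possible reading of the mapping; these pairings are illustrative,
# not reproduced from the original figure. Values reference the NIST
# SP 800-207 tenet numbers listed above.
KILL_CHAIN_TO_TENETS = {
    "Reconnaissance":         [7],         # telemetry collection surfaces probing
    "Weaponization":          [],          # occurs off-network; little to verify
    "Delivery":               [2, 3],      # secured channels, per-session access
    "Exploitation":           [5, 6],      # device posture, strict dynamic authz
    "Installation":           [5],         # secure device state resists implants
    "Command & Control (C2)": [2, 4],      # secured communication, dynamic policy
    "Actions on Objectives":  [1, 3, 6],   # least-privilege access to resources
}
```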

About the Author

Kathleen Moriarty
Chief Technology Officer

Kathleen Moriarty, Chief Technology Officer at the Center for Internet Security, has over two decades of experience. Formerly the Security Innovations Principal in the Dell Technologies Office of the CTO, Kathleen worked on ecosystems, standards, and strategy. During her tenure in the Dell EMC Office of the CTO, Kathleen had the honor of being appointed and serving two terms as the Internet Engineering Task Force (IETF) Security Area Director and as a member of the Internet Engineering Steering Group from March 2014 to 2018. She was named in CyberSecurity Ventures' Top 100 Women Fighting Cybercrime, and she is a 2020 Tropaia Award winner for Outstanding Faculty at Georgetown SCS.

Kathleen has over twenty years of experience driving positive outcomes across Information Technology Leadership, IT Strategy and Vision, Information Security, Risk Management, Incident Handling, Project Management, Large Teams, Process Improvement, and Operations Management in multiple roles with MIT Lincoln Laboratory, Hudson Williams, FactSet Research Systems, and PSINet. She holds a Master of Science degree in Computer Science from Rensselaer Polytechnic Institute, as well as a Bachelor of Science degree in Mathematics from Siena College.