Zero-Trust Architecture

Mohammed Mahboubi
7 min read · Jun 23, 2020

The work of the Jericho Forum in the summer of 2003 paved the way for de-perimeterization. That concept eventually evolved into what is now known as zero trust, a term coined by a Forrester analyst in 2010.

In the meantime, the spread of IT mobility (teleworking and mobile device use), virtualization, cloud computing and the Internet of Things led to a shift in how we consume data and interact with the network. That behavioral change erased traditional network boundaries and made data available outside the confines of the enterprise. Furthermore, perimeter-based network security has shown its limitations: once an attacker breaches the perimeter, lateral movement becomes possible, because there are few hindrances to exploring the network, finding new targets and subsequently gaining access to them. As a consequence, perimeter-based security models became insufficient, because they assume that threats always come from the outside (never from the inside).

The new frontier is now the data; more specifically, the pairing of subject identity and data, where the subject is the entity that needs access to the data.

Architecture

If I had to summarize the Zero Trust architecture model in one sentence, I would use the motto: “never trust, always verify”. In that model, every access request is strongly vetted against context-enriched policies before access is granted. Micro-segmentation and least-privilege access are enforced to contain lateral movement. Eventually, artificial intelligence and automation come into play to help us detect, identify, auto-heal and auto-learn from incidents.
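As a rough illustration of “never trust, always verify” (a minimal sketch of my own, not taken from any product or from the NIST paper), a policy engine might deny by default and grant access only when every contextual check on the request passes. The signal names and the threshold are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    # Hypothetical contextual signals collected for every single request
    subject_id: str          # who is asking (user or service identity)
    device_compliant: bool   # device posture: patched, not flagged as infected
    mfa_passed: bool         # strong authentication completed for this session
    resource: str            # the data or service being requested
    risk_score: float        # 0.0 (benign) .. 1.0 (high risk), from analytics

def evaluate(request: AccessRequest, allowed_resources: set) -> bool:
    """Deny by default; grant only when every contextual check passes."""
    if request.resource not in allowed_resources:           # least privilege
        return False
    if not (request.device_compliant and request.mfa_passed):
        return False
    if request.risk_score > 0.7:                            # illustrative threshold
        return False
    return True

# A compliant, authenticated request to a permitted resource is the only case allowed.
req = AccessRequest("alice", True, True, "hr-database", risk_score=0.1)
print(evaluate(req, allowed_resources={"hr-database"}))     # True
```

The point of the sketch is the default-deny shape of the decision, not the particular checks, which will differ from one environment to another.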

Making the transition to a zero trust model is a journey based on six pillars: identity, device, application, data, infrastructure and network. These pillars encompass eight systems cited in the paper on Zero Trust architecture published by NIST in February 2020.

1. Continuous Diagnostics and Mitigation (CDM) system: It provides the policy engine with information about the asset making an access request so that its security posture can be assessed (e.g., is the asset's cyber hygiene sound in terms of up-to-date patches, absence of infection, etc.?).

2. Industry compliance system: It ensures that the enterprise remains compliant with any regulatory regime that it may fall under.

3. Threat intelligence feed(s): They provide information that helps the policy engine make access decisions. The information may relate to new attacks, vulnerabilities or Indicators of Compromise.

4. Data access policies: These are the attributes, rules, and policies about access to enterprise resources.

5. Public Key Infrastructure (PKI): It is responsible for generating and logging certificates issued by the enterprise to resources, subjects, and applications.

6. Identity and Access Management (IAM) system: It is responsible for creating, storing, and managing enterprise user accounts and identity records.

7. Network and system activity logs: This is the enterprise system that aggregates asset logs, network traffic, resource access actions, and other events that provide real-time feedback on the security posture of enterprise information systems.

8. Security Information and Event Management (SIEM) system: It collects security-centric information for later analysis. This data is then used to refine policies and warn of possible attacks against enterprise assets.
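To make the relationship between these sources and the policy engine concrete, they can be pictured as fields of a single context record that the engine consults for each request. This is only a minimal sketch of mine; the field names are illustrative, not taken from NIST:

```python
from dataclasses import dataclass, field

@dataclass
class PolicyContext:
    """Hypothetical snapshot of the eight data sources, as consumed by the policy engine."""
    cdm_posture: dict = field(default_factory=dict)          # 1. asset hygiene (patch level, infection flags)
    compliance_ok: bool = True                                # 2. regulatory requirements satisfied
    threat_intel: list = field(default_factory=list)         # 3. relevant IoCs, new vulnerabilities
    data_access_policies: dict = field(default_factory=dict) # 4. who may reach which resource
    certificate_valid: bool = False                           # 5. PKI-issued certificate checks out
    identity: dict = field(default_factory=dict)              # 6. IAM record for the requesting subject
    activity_logs: list = field(default_factory=list)         # 7. recent network/system events
    siem_alerts: list = field(default_factory=list)           # 8. correlated alerts from the SIEM
```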

Zero trust logical components

Security Considerations

The raison d’être of cybersecurity is to protect the data and the cyber infrastructure. Data protection means protecting data from unauthorized access, corruption and loss of control (e.g., ransomware). Cyber infrastructure protection means preserving availability of, control over, and visibility into the systems.

1. Ubiquitous encryption

Privacy became a strong concern in the cyber estate, due to the mass-surveillance scandal revealed in 2013 and, later, the dramatic breaches disclosing Personally Identifiable Information reported by many news media in the following years. As a result, encryption became ubiquitous. As a matter of fact, pervasive encryption makes network forensics more complicated.

For example, DNS traffic is now being encrypted using either TLS (DNS over TLS, aka DoT) or HTTPS (DNS over HTTPS, aka DoH). These two alternatives mark a strong rift between privacy advocates (pro DoH) and security administrators (pro DoT). But they can also be leveraged by attackers to exfiltrate data, because encryption hinders the traditional methods used to detect anomalies in DNS traffic (such as large content in TXT or NULL records, a spike in DNS queries, or queries with long domain names).

So, to improve network visibility, machine-learning techniques should be leveraged to decide whether encrypted traffic is valid, innocuous or potentially malicious.
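As a minimal sketch (assuming query names are still visible somewhere, e.g., in resolver logs or before DoH encapsulation), here is the kind of per-query feature a detector or classifier could consume. The feature names and the choice of features are mine, for illustration only:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Character entropy; tunneled/encoded payloads tend to look random."""
    counts = Counter(s)
    total = len(s)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def dns_query_features(qname: str, qtype: str) -> dict:
    """Simple per-query features that a classifier could consume."""
    labels = qname.rstrip(".").split(".")
    return {
        "qname_length": len(qname),                  # exfiltration names tend to be long
        "longest_label": max(len(l) for l in labels),
        "entropy": shannon_entropy(qname),            # high for base64-like payloads
        "rare_qtype": qtype in ("TXT", "NULL"),       # record types often abused
    }

# Example: an unusually long, high-entropy query stands out against normal traffic.
print(dns_query_features("aGVsbG8gd29ybGQgZXhmaWw.attacker.example.com", "TXT"))
```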

2. Control plane security

In a ZTA model, the control plane is the brain that decides whether or not to grant an access request. That brain contains the policy engine (PE) and the policy administrator (PA). If these two components are compromised (intentionally or not), the entire enterprise infrastructure is at risk, and so is data protection.

Special attention should be paid to making them strongly resilient. Furthermore, every access to them and every change of their configuration should be subject to an audit.

3. Identity theft

According to the Verizon Data Breach Investigations Report of 2020, the use of stolen credentials is the second most common threat action variety in breaches.

An attacker with valid credentials may be able to access the resources to which that account has been granted access. When the attacker attempts lateral movement, access will be denied to any resource the compromised credentials are not authorized for.

Furthermore, a contextual trust algorithm (TA) takes a user's or asset's recent history into account when evaluating access requests. That evaluation allows the TA to detect access patterns that deviate from normal behavior, and allows the PE to act accordingly by denying the access request.

So, in a ZTA model, a trust algorithm should be contextual rather than singular (evaluating each request in isolation), so that behavioral analysis becomes possible.
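A contextual TA can be sketched as a score that blends the current request with the subject's recent history. The weights, thresholds and history window below are purely illustrative assumptions, not a prescribed algorithm:

```python
from collections import deque

class ContextualTrust:
    """Toy contextual trust score: recent denials and unusual context lower trust."""
    def __init__(self, history_size: int = 50):
        self.history = deque(maxlen=history_size)   # recent outcomes for this subject

    def score(self, hour_of_day: int, new_location: bool) -> float:
        trust = 1.0
        # Requests outside the subject's usual working hours reduce trust.
        if hour_of_day < 7 or hour_of_day > 20:
            trust -= 0.3
        # A location never seen in recent history reduces trust further.
        if new_location:
            trust -= 0.3
        # A burst of recent denials is itself a suspicious pattern.
        recent_denials = sum(1 for granted in self.history if not granted)
        trust -= min(0.4, 0.05 * recent_denials)
        return max(0.0, trust)

    def record(self, granted: bool):
        self.history.append(granted)

ta = ContextualTrust()
print(ta.score(hour_of_day=3, new_location=True))   # low score -> the PE should deny
```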

4. Storage of network information

The management tool used to encode access policies is the keystone of a ZTA model: it contains the access policies to resources, and it can give attackers information about the most valuable assets to compromise (those that have access to the desired data).

Management tools should have the most restrictive access policies and should be accessible only from named, individual administrator accounts.

5. Interoperability

ZTA relies on different components (data source providers) to make access decisions (e.g., user identity, credentials, resource, threat intelligence, SIEM, etc.). These components do not have a common, open standard for interacting and exchanging information. This may leave an organization locked into a technology or vendor for a very long time, without being able to swiftly switch to another provider.

If the chosen ZTA solution becomes ineffective or is compromised after a security incident, how much will it cost the organization to switch to a more effective solution? How long will it take to go through a transition program? And at the end of such a transition, will the new solution still be effective?

An organization needs to evaluate the available products through a holistic approach; this allows well-informed decisions before taking the leap toward a ZTA. Otherwise, it exposes itself to a serious money pit. This situation can be avoided only by having visibility, which is the whole purpose of open standards.

6. Artificial intelligence & Security automation

Due to the surge in data volume that SOC analysts have to evaluate (for an SMB [Small & Midsize Business] with a resilient architecture design, we should expect at least 4,600 EPS [Events Per Second], which is roughly 150 GB per day), dwell times (the elapsed time to discover a breach) can only increase (circa 285 days in some cases). This situation calls for security automation in the SIEM field, to assist analysts and reduce dwell times. Security automation allows security components (which may be referred to as non-person entities [NPEs]) to act on security policies based on behavioral analysis. So, we are dealing more and more with concepts from AI and machine learning.
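For a rough sense of scale, the figures above line up as follows, assuming an average normalized event size of around 400 bytes (that size is an assumption of mine, not a quoted figure):

```python
# Back-of-the-envelope check of the volume cited above.
# ASSUMPTION: an average normalized event weighs roughly 400 bytes.
eps = 4600                      # events per second
avg_event_bytes = 400           # assumed average event size
seconds_per_day = 86_400

events_per_day = eps * seconds_per_day
gb_per_day = events_per_day * avg_event_bytes / 1e9

print(f"{events_per_day:,} events/day ~ {gb_per_day:.0f} GB/day")
# -> roughly 397,440,000 events/day ~ 159 GB/day, in line with the ~150 GB figure
```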

How these components should authenticate in a ZTA model has not been standardized.

Simply put, an NPE may use weaker authentication methods (e.g., an API token key) to perform a security operation than a human user (who will be compelled to use multi-factor authentication).

If an attacker succeeds in interacting with an NPE, he can lure the NPE into performing tasks on his behalf, or he can retrieve the credentials used by the NPE in order to impersonate it.
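To make the gap concrete, the sketch below contrasts a static, long-lived API key with a short-lived, HMAC-signed request token. This is one possible hardening assumed here purely for illustration, not something prescribed by any ZTA standard:

```python
import hashlib, hmac, time

SHARED_SECRET = b"npe-demo-secret"    # illustrative only; would normally live in a vault

# Weak pattern: the same static key accompanies every request, indefinitely.
static_api_key = "npe-long-lived-api-key"   # if stolen, it works forever

# Slightly stronger pattern: sign each request with an expiry, so a captured
# value becomes worthless after a short time and cannot be replayed later.
def sign_request(action: str, ttl_seconds: int = 60) -> dict:
    expires = int(time.time()) + ttl_seconds
    payload = f"{action}|{expires}".encode()
    signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return {"action": action, "expires": expires, "signature": signature}

def verify_request(msg: dict) -> bool:
    if time.time() > msg["expires"]:
        return False                   # the token has already expired
    payload = f"{msg['action']}|{msg['expires']}".encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["signature"])

print(verify_request(sign_request("quarantine-host")))   # True while the token is fresh
```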

Conclusion

Zero-trust architecture is not a mature technology yet, but it improves the security posture of an organization by adapting it to an open, interconnected cyber world. No architecture model can ever cancel IT risk. We can only strive to manage that risk by reducing the impact of a compromise; if a security incident occurs, we have to contain it as quickly as possible. Cybersecurity is a continuous process, fed by threat intelligence (from outside and inside sources) to improve the security posture as a whole. Due to the humongous volume of data, AI, machine learning and security automation are going to play a key role in the new cyber estate.

Zero-trust architecture should be viewed as a strategy for evolving toward a safer cyber world. But that evolution can succeed only if it is based on interoperability and open standards.
