What Is Confidential Computing, and Why Is It Key to Securing Data in Use?


For decades, security professionals have focused on protecting data at rest and data in transit. Good tools did not exist for protecting data in use. Data in use resides in volatile memory (RAM), unencrypted and available to compromised applications, firmware, operating systems, and hypervisors. Confidential computing changes this by using hardware to erect walls between application processing environments and the underlying operating system and other applications. 

We look at the key challenges of protecting data in use in the face of growing cyberattacks and at the emerging technologies that can help secure it, with a particular focus on confidential computing.

Challenges of Protecting Data in Use

Many tactics, techniques, and procedures (TTPs) exist for protecting data and applications at rest and in transit, including access controls, encryption, hashing, and network segmentation. However, data in use has always been a weakly protected target.

Figure 1: Normal Processing Attack Surfaces

Figure 1 shows two types of server environments: virtual and discrete (physical). When an application runs, it accesses data on an encrypted data source. The data is encrypted as it travels to the application. When it reaches the server, the data is decrypted and stored in RAM for application access. In traditional environments, threat actors can potentially access the data in memory.
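The gap described above can be sketched in a few lines of Python. This toy example (using an illustrative XOR keystream derived from SHA-256, not a real cipher) shows why data is protected in transit yet sits in plaintext in process memory once decrypted for the application:

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    For illustration only -- not a production cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

record = b"salary=95000;employee=1234"
key = b"shared-session-key"

in_transit = keystream_xor(key, record)   # ciphertext on the wire
in_ram = keystream_xor(key, in_transit)   # decrypted for the application

assert in_transit != record               # protected while in transit...
assert in_ram == record                   # ...but plaintext once in RAM
```

Anything that can read this process's memory (a compromised OS, hypervisor, or debugger) sees `in_ram` directly; this is precisely the exposure confidential computing targets.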

Across these server examples, the data attack surfaces include:

  • Device firmware
  • Device drivers
  • A hypervisor
  • Operating systems
  • Applications

If a threat actor compromises any element of this attack surface with the right TTPs, they can read or modify data stored in RAM. In some cases, this access can also expose encryption keys. If the hypervisor is compromised, multiple servers are open to attack.

In the past, protecting this largely unprotected information has relied on keeping the server itself clean. Technologies like the Unified Extensible Firmware Interface (UEFI) have helped. However, UEFI cannot close every gateway leading to sensitive data in use.

One of the biggest challenges is in the cloud. Multiple customers usually share cloud resources. Keeping applications and data isolated is difficult, and server compromise can expose multiple organizations to data theft.

Learn more: Data Protection and Backup: Top 4 Cloud Native Solutions Enterprises Can’t Ignore

Emerging Solutions That Can Help

Three emerging solutions seek to reduce the data-in-use attack surface: homomorphic encryption, secure elements, and confidential computing.

Homomorphic encryption

Homomorphic encryption allows processes to access and use data that remains encrypted in memory. It enables access to the data without access to the secret key. Further, the results of the processing are also stored encrypted. 

In a Wired article, Andy Greenberg, author of “Sandworm: A New Era of Cyberwar and the Hunt for the Kremlin’s Most Dangerous Hackers,” describes a simple example of how this might work with a homomorphically encrypted search engine. An employee seeking information about earnings would submit an encrypted employee ID or name to the search engine. The search engine uses that encrypted string to search the encrypted employee database, and the encrypted results are returned to the requesting user. At no time are the search string or the returned data decrypted.
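A minimal way to see the homomorphic principle is textbook RSA, which happens to be multiplicatively homomorphic: multiplying two ciphertexts yields a ciphertext of the product of the plaintexts. (Practical systems use purpose-built schemes such as Paillier or lattice-based FHE, not raw RSA; the tiny parameters below are for illustration only.)

```python
# Textbook RSA with toy parameters -- never use keys this small in practice.
p, q = 61, 53
n = p * q                 # 3233
e, d = 17, 413            # e * d = 1 (mod lcm(p-1, q-1) = 780)

def enc(m: int) -> int:
    return pow(m, e, n)

def dec(c: int) -> int:
    return pow(c, d, n)

a, b = 6, 7
product_ct = (enc(a) * enc(b)) % n   # computed on ciphertexts only
print(dec(product_ct))               # -> 42, without ever decrypting a or b
```

The server performing the multiplication never needs the private key `d`, which is the essential property: computation proceeds on data that remains encrypted in memory.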

Secure element

Kaspersky defines a secure element as “…a chip that is by design protected from unauthorized access and used to run a limited set of applications, as well as store confidential and cryptographic data.” An excellent example of a secure element is the Trusted Platform Module (TPM).

A secure element can be used to:

  • Detect attempts to modify system elements
  • Create a root of trust for a system
  • Securely create and store keys
  • Generate random numbers

The UEFI is a good example of how a secure element can be used for firmware and OS verification on boot up.
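The "root of trust" and tamper-detection roles above can be illustrated with the TPM's PCR-extend operation, in which each boot component is hashed into a running measurement (new PCR = SHA-256 of the old PCR concatenated with the measurement). The component names below are invented for the sketch:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR value = SHA-256(old PCR || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

pcr0 = bytes(32)  # PCRs start at all zeros
for component in (b"firmware-v1.2", b"bootloader-v3", b"kernel-5.15"):
    pcr0 = pcr_extend(pcr0, hashlib.sha256(component).digest())

# Any change to any component, or to the boot order, yields a different
# final value, so a verifier comparing pcr0 against a known-good value
# detects tampering anywhere in the chain.
print(pcr0.hex())
```

Because each value folds in all previous ones, the final PCR commits to the entire boot sequence, which is what lets a secure element anchor a root of trust.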

Confidential computing

Confidential computing uses hardware to reach and surpass the objectives of secure elements and homomorphic encryption. Based on a Trusted Execution Environment (TEE), confidential computing is intended to protect: 

  • Data confidentiality
  • Data integrity
  • Code integrity
  • Code confidentiality (depends on TEE solution)

Table 1 shows one perspective of the differences between confidential computing and the other two approaches. As shown, confidential computing more significantly reduces a system’s processing attack surface.   

Table 1: Comparisons (Confidential Computing Consortium)

Learn more: 6 Best Practices To Implement Cloud Strategy and Data Protection in the Hybrid Work World

Confidential Computing in Operation

The Confidential Computing Consortium (CCC) specifies what should fundamentally constitute a TEE. Multiple solutions currently exist or are emerging to implement TEEs, all based on using hardware to enforce restrictions on code and data access during processing. For this article, I focus on Intel’s Software Guard Extensions (SGX); ARM and AMD also have solutions.

Figure 2 shows an SGX TEE in operation. Step 1 is to create an application compatible with SGX. Solutions exist to help move legacy containers, VMs, and applications to TEE environments without changing any code.

Figure 2: Intel SGX TEE

In Step 2, a secure enclave is created, including allocated encrypted memory. The application runs within the enclave, and all processed data is held in the enclave’s secured memory. Host firmware and software are blocked from accessing the enclave. The only way in is via a gateway, as depicted by the yellow square.

Step 3 shows a call to the application. The request is sent via the gateway, where all access is secured through TEE attestation. The application processes the request and returns the result in Step 5.

Nothing outside of processes that successfully pass attestation can access the enclave, and even those processes have no direct access to the encrypted enclave memory space. If threat actors compromise a device, they cannot access anything running in its TEE enclaves. This protects the integrity of the applications and the confidentiality and integrity of the data.

Multiple enclaves can run on the same machine, and they can communicate with each other, but only via enclave gateways after attestation. Because of these gateway controls, one cloud customer’s enclaves can be completely isolated from the cloud operating environment and from other customers’ applications, making resource sharing far more secure.
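The attestation gate described above can be sketched as follows. This is a simulation, not real SGX attestation, which relies on CPU-fused keys and Intel’s quoting and verification infrastructure; the HMAC “hardware key” and all names here are purely illustrative:

```python
import hashlib
import hmac

# Illustrative stand-in for a key fused into the CPU at manufacture.
HARDWARE_KEY = b"simulated-cpu-fused-key"

def quote(enclave_code: bytes) -> tuple:
    """'Hardware' measures the enclave binary and signs the measurement."""
    measurement = hashlib.sha256(enclave_code).digest()
    signature = hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest()
    return measurement, signature

def verify(measurement: bytes, signature: bytes, expected: bytes) -> bool:
    """Gateway admits a caller only if the measurement is genuinely signed
    and matches the known-good enclave build."""
    good_sig = hmac.new(HARDWARE_KEY, measurement, hashlib.sha256).digest()
    return hmac.compare_digest(signature, good_sig) and measurement == expected

code = b"enclave binary v1"
expected = hashlib.sha256(code).digest()
m, s = quote(code)
print(verify(m, s, expected))                                      # True
print(verify(hashlib.sha256(b"tampered").digest(), s, expected))   # False
```

The key idea is that the verifier trusts the hardware's signature over a measurement of the enclave, not the host OS or hypervisor, so a compromised host cannot forge its way through the gateway.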

Learn more: Data Clean Rooms: A Secret Weapon Against Data Breaches and Data Security Vulnerabilities

Final Thoughts

TEEs are still in the early stages, so careful analysis is needed when selecting a solution. While solution providers adhere to the fundamentals specified by the CCC, some offer more. Further, multiple vendors provide a seamless, incremental path from current environments to a TEE.

TEEs are not necessarily a replacement for homomorphic encryption and secure elements. Instead, they are another layer of security to consider when managing risk. As with any controls, on-premises adoption depends on budget, risk appetite, and control capabilities. The cloud, however, is a different matter.

Cloud service providers cannot provide proper customer separation without either placing customer resources on separate servers or using TEEs. TEEs are an essential capability to look for when considering moving sensitive information and critical systems to the cloud.

Do you think confidential computing is more dependable than homomorphic encryption and secure elements for securing data in use? Comment below or let us know on LinkedIn, Twitter, or Facebook. We would love to hear from you!