The Three States of Data and Their Final Frontier
Data security has traditionally focused on protecting data in two of its three states: at rest and in transit. Data at rest — stored on disks, in databases, in backup archives — is encrypted using well-established technologies like AES-256. Data in transit — moving across networks between systems — is protected by TLS, IPsec, and other transport encryption protocols. These protections are mature, widely deployed, and generally effective.
But data has a third state: data in use. When an application processes data, that data must be decrypted and loaded into system memory in plaintext. For the duration of that processing — whether it takes milliseconds or hours — the data sits unencrypted in RAM, vulnerable to a class of attacks that no amount of storage encryption or network security can prevent.
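The gap can be illustrated with a short sketch. A toy XOR stream cipher stands in for AES-256, and all names are illustrative: even when a record is encrypted at rest, the processing step forces a plaintext copy into memory.

```python
import hashlib

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (stand-in for AES-256): XOR with a hash-derived keystream.
    Illustration only - never use this for real data."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR stream ciphers are their own inverse

key = b"storage-key"
record = b"patient_id=4711;diagnosis=..."

at_rest = toy_encrypt(key, record)   # on disk: unreadable ciphertext
assert at_rest != record

in_use = toy_decrypt(key, at_rest)   # processing requires plaintext in RAM
assert in_use == record              # <- this copy is what memory-scraping,
                                     #    cold-boot, and hypervisor attacks target
```

Protecting `at_rest` and the network hop that delivered it does nothing for `in_use`; that is the state confidential computing addresses.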
This vulnerability is not theoretical. Memory scraping attacks, cold boot attacks (physically extracting RAM to read its contents), and compromised hypervisor attacks (where a malicious cloud administrator or rogue process reads a virtual machine’s memory from the host operating system) have all been demonstrated. In a shared cloud environment, where multiple tenants’ workloads run on the same physical hardware, the risk is particularly acute: a vulnerability in the hypervisor or host operating system could expose one tenant’s unencrypted data to another.
Confidential computing closes this final encryption gap. By using hardware-based security features built into modern CPUs — and now GPUs — confidential computing encrypts data even while it is being processed. For the first time, it is possible to encrypt data across all three states, creating an end-to-end encryption model where sensitive data is never exposed in plaintext outside the hardware security boundary of the processor itself.
The Hardware Foundation: AMD SEV-SNP, Intel TDX, and NVIDIA GPU TEE
Confidential computing relies on hardware security features embedded in the processor itself, not software-level encryption that could be compromised by a privileged attacker. The three leading implementations come from AMD, Intel, and NVIDIA, each addressing different layers of the compute stack.
AMD SEV-SNP (Secure Encrypted Virtualization – Secure Nested Paging)
AMD’s approach to confidential computing has evolved through several generations. The current state of the art, SEV-SNP, provides hardware-enforced memory encryption and integrity protection for virtual machines running on AMD EPYC processors.
With SEV-SNP, each virtual machine receives a unique encryption key generated and managed by a dedicated security processor on the CPU. All memory pages belonging to that VM are encrypted with its unique key before being written to DRAM. When the VM’s vCPU accesses memory, the data is decrypted transparently at the CPU boundary. Neither the hypervisor, the host operating system, nor other VMs on the same physical host can read the encrypted memory contents.
The “SNP” (Secure Nested Paging) component adds integrity protection, preventing a malicious hypervisor from replaying, reordering, or tampering with a VM’s memory pages. This addresses a class of attacks where the hypervisor manipulates the memory mapping rather than reading memory contents directly.
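A toy model captures the isolation property described above. Real SEV-SNP performs AES encryption in the memory controller, keyed per VM and tweaked by physical address; everything in this sketch (the XOR pad, the key handling) is a stand-in for illustration.

```python
import hashlib, os

def page_xform(vm_key: bytes, page_addr: int, data: bytes) -> bytes:
    """Toy per-VM page encryption: XOR with a key- and address-derived pad.
    (Real SEV-SNP uses AES in the memory controller; this models only the
    isolation behavior, not the cryptography.)"""
    pad = hashlib.sha256(vm_key + page_addr.to_bytes(8, "big")).digest()
    return bytes(a ^ b for a, b in zip(data, pad * (len(data) // 32 + 1)))

# The dedicated security processor assigns each VM its own key;
# the hypervisor never sees either key.
vm_a_key, vm_b_key = os.urandom(32), os.urandom(32)

secret = b"vm-a customer ledger page"
dram = page_xform(vm_a_key, 0x1000, secret)   # what actually lands in DRAM

assert dram != secret                                # hypervisor read: ciphertext
assert page_xform(vm_b_key, 0x1000, dram) != secret  # another VM's key: garbage
assert page_xform(vm_a_key, 0x1000, dram) == secret  # owning vCPU: transparent
```

Binding the pad to the page address also hints at why SNP's integrity protection matters: a page moved or replayed to a different address no longer decrypts to the original contents.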
AMD SEV-SNP is available on third-generation and later EPYC processors and is supported by all major cloud providers. Google Cloud, Microsoft Azure, and AWS all offer confidential VM instances running on AMD SEV-SNP-enabled hardware. Empirical analysis suggests that SEV-SNP carries lower performance overhead and is simpler to deploy than comparable TEE technologies, making it well suited to financial services and healthcare workloads.
Intel TDX (Trust Domain Extensions)
Intel’s Trust Domain Extensions take a similar approach with some architectural differences. TDX creates hardware-isolated “Trust Domains” (TDs) that are protected from the hypervisor, BIOS, and other software running on the host.
Each Trust Domain receives its own set of encryption keys, managed by a new CPU component called the TDX Module. Memory belonging to a TD is encrypted and integrity-protected, with the encryption and decryption happening at the CPU’s memory controller — invisible to software at all levels.
TDX also provides architectural support for attestation — the process by which a Trust Domain can cryptographically prove to a remote party that it is running specific, unmodified code on genuine Intel hardware with TDX protections active. This attestation capability is critical for use cases where multiple organizations need to collaborate on sensitive data without trusting each other’s infrastructure. Intel’s Trust Authority service provides a managed attestation infrastructure that simplifies this verification process.
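The attestation flow can be sketched as follows. An HMAC stands in for the ECDSA-signed hardware quote, and the field names are illustrative, not the actual TDX report format: the verifier checks that the report is signed by genuine hardware, is fresh (bound to a verifier-chosen nonce), and attests the expected code measurement.

```python
import hmac, hashlib, json, os

HW_ROOT_KEY = os.urandom(32)  # stand-in for the CPU's hardware root of trust

def issue_report(nonce: bytes, code_measurement: str) -> dict:
    """Inside the Trust Domain: hardware signs a report of what is running."""
    body = json.dumps({"nonce": nonce.hex(), "measurement": code_measurement})
    sig = hmac.new(HW_ROOT_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_report(report: dict, nonce: bytes, expected_measurement: str) -> bool:
    """Remote party: check signature, freshness, and expected code identity."""
    sig = hmac.new(HW_ROOT_KEY, report["body"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, report["sig"]):
        return False                      # not signed by genuine hardware
    body = json.loads(report["body"])
    return body["nonce"] == nonce.hex() and body["measurement"] == expected_measurement

nonce = os.urandom(16)                    # verifier-chosen, prevents replay
report = issue_report(nonce, "sha384:expected-workload-hash")
assert verify_report(report, nonce, "sha384:expected-workload-hash")
assert not verify_report(report, nonce, "sha384:tampered-workload-hash")
```

In production, the verification step is what services like Intel Trust Authority perform on an organization's behalf, checking the quote against Intel's certificate chain rather than a shared key.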
Intel TDX is available on fourth-generation Xeon Scalable processors (Sapphire Rapids) and later. Google Cloud has made Confidential VMs generally available on C3 machine series with Intel TDX, and Azure supports TDX-based confidential VMs. TDX provides particularly comprehensive attestation capabilities suited for government, defense, and high-security AI workloads.
NVIDIA GPU TEE: Extending Confidential Computing to AI Accelerators
A critical development that has transformed the practical value of confidential computing is NVIDIA’s extension of Trusted Execution Environments to GPUs. The NVIDIA H100 Tensor Core GPU is the first GPU to support confidential computing, using a hardware-based TEE anchored in an on-die hardware root of trust.
In confidential computing mode, the H100 works with CPUs that support confidential VMs (CVMs), using an encrypted bounce buffer to move data securely between the CPU and GPU. This extends the TEE from the CPU to the GPU, enabling confidential computing for the AI training and inference workloads that increasingly drive enterprise computing.
The performance impact is remarkably modest: for the majority of typical large language model queries, the overhead remains below 5%, with larger models and longer sequences experiencing near-zero overhead. NVIDIA GPU confidential computing, initially launched in private preview in July 2023, became generally available with CUDA 12.4 and is now central to enterprise AI infrastructure in regulated industries.
Why Confidential Computing Matters Now
Confidential computing has been in development for years, but several converging trends have made it urgent in 2026. The confidential computing market has grown from $9.3 billion in 2025 to a projected $15.2 billion in 2026 — reflecting a critical inflection point in enterprise adoption.
AI Workloads on Sensitive Data
The most compelling use case for confidential computing is AI training and inference on sensitive data. Over 70% of enterprise AI workloads are expected to involve sensitive data by 2026. Healthcare organizations want to train AI models on patient records. Financial institutions want to run fraud detection models on transaction data. Government agencies want to use AI for intelligence analysis. In all cases, the data is too sensitive to process in plaintext on shared cloud infrastructure, even with contractual and policy protections.
Confidential computing — now extended to GPUs through NVIDIA’s H100 — allows these organizations to process their most sensitive data on cloud infrastructure with hardware-enforced guarantees that the cloud provider, other tenants, and even compromised administrators cannot access the data during processing. This is a fundamentally stronger security guarantee than any contractual or policy-based approach.
Regulatory Requirements
Data protection regulations worldwide are tightening. The EU’s GDPR, the Digital Operational Resilience Act (DORA), the US healthcare HIPAA framework, financial services regulations like PCI-DSS, and emerging AI-specific regulations including the EU AI Act all impose requirements on how sensitive data is processed. Gartner predicts that 60% of enterprises will evaluate Trusted Execution Environments by the end of 2025, driven in significant part by DORA’s explicit requirement to protect data in-use — with 77% of organizations reporting they are more likely to adopt confidential computing because of DORA compliance needs.
The Confidential Computing Consortium’s newly created Regulators and Standards Special Interest Group, chaired by Google Cloud’s Solomon Cates with JPMC’s Michael Guzman as Vice Chair, signals the industry’s recognition that regulatory alignment is becoming the primary adoption driver.
Multi-Party Data Collaboration
Confidential computing enables a new category of application: secure multi-party computation on combined datasets. Two healthcare systems that cannot share patient data due to privacy regulations can run a shared AI model inside a confidential computing enclave, where their combined data is processed but neither party can access the other’s raw data. Financial institutions can collaboratively detect fraud and money-laundering patterns across their customer bases and across jurisdictions without exposing individual customer records.
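The clean-room pattern above can be sketched in a few lines. The class, threshold, and party names are all hypothetical; in a real deployment the aggregation logic would run inside an attested enclave, and each party would verify the enclave's attestation before submitting data.

```python
class CleanRoom:
    """Toy data clean room: parties submit records, and only aggregate
    statistics ever leave - no party can read another's raw rows."""
    def __init__(self):
        self._rows = []          # lives only inside the (notional) enclave

    def submit(self, party: str, amounts: list[float]) -> None:
        self._rows.extend((party, a) for a in amounts)

    def flagged_total(self, threshold: float) -> dict:
        """Release only an aggregate: count and sum of transactions over
        the threshold, with no per-party or per-row detail."""
        flagged = [a for _, a in self._rows if a > threshold]
        return {"count": len(flagged), "sum": sum(flagged)}

room = CleanRoom()
room.submit("bank_a", [120.0, 9800.0, 15000.0])
room.submit("bank_b", [50.0, 12000.0])

# Each bank learns the joint pattern, never the other's individual records.
print(room.flagged_total(10_000))   # {'count': 2, 'sum': 27000.0}
```

The hardware's role is to make this containment enforceable: even the operator of the machine hosting `CleanRoom` cannot inspect `_rows`.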
This “data clean room” use case has enormous potential but requires the strong isolation guarantees that only hardware-based confidential computing can provide. Purely software-based alternatives exist (homomorphic encryption, secret-sharing-based multi-party computation protocols), but they remain orders of magnitude slower, making them impractical for workloads at this scale.
The Ecosystem Takes Shape
The confidential computing ecosystem has matured significantly, evolving from a research curiosity to a deployable technology with broad commercial support. The Open Confidential Computing Conference (OC3) 2026 and the Confidential Computing Summit reflect an industry that has moved from awareness-building to deployment-focused discussions.
Cloud Provider Support
All three major cloud providers now offer confidential computing services:
Google Cloud offers Confidential VMs on AMD SEV-SNP (N2D machine series) and Intel TDX (C3 machine series), plus Confidential GKE Nodes for containerized workloads. Google has been particularly aggressive in promoting confidential computing, providing hardware-rooted attestation and pricing confidential instances at a modest premium over standard instances.
Microsoft Azure offers Confidential VMs on both AMD SEV-SNP and Intel TDX, as well as Azure Confidential Ledger for tamper-proof data storage and Azure Attestation for verifying confidential computing environments.
Amazon Web Services offers Nitro Enclaves, a somewhat different approach that creates isolated compute environments within EC2 instances. AWS has also introduced confidential computing options based on AMD SEV-SNP for specific instance families.
The Startup Ecosystem
A growing ecosystem of startups is building tools and platforms that make confidential computing more accessible. Enclaive, a Berlin-based startup founded in 2022, closed a EUR 4.1 million seed round in February 2026, co-led by Join Capital and the Amadeus APEX Technology Fund. Its Multi Cloud Platform (eMCP) enables organizations to deploy existing containerized applications in confidential Kubernetes environments across multiple cloud providers without code modifications. The company has earned recognition including the 2025 Prix Croissance at the InCyber Forum and the 2024 TeleTrust Product of the Year.
Other startups in the space include Anjuna Security, which provides a platform for running unmodified applications in confidential computing enclaves across multiple cloud providers, and Fortanix, which offers a confidential computing platform focused on key management and data security.
Open Standards
The Confidential Computing Consortium (CCC), a Linux Foundation project with members including AMD, Intel, Google, Microsoft, Red Hat, and NVIDIA, is developing open standards for attestation, interoperability, and API specifications. For 2026, the CCC’s Technical Advisory Committee is focused on delivering adoption-focused technical guidance, with particular emphasis on full-stack Confidential Computing and Secure and Sovereign AI. These standards are critical for preventing vendor lock-in and enabling multi-cloud confidential computing deployments.
Performance and Practical Considerations
Confidential computing is not free. Hardware memory encryption and integrity verification impose performance overhead that varies depending on the workload, the hardware generation, and the implementation.
Current AMD SEV-SNP and Intel TDX implementations typically show performance overhead in the range of 2% to 10% for compute-intensive workloads. Memory-intensive workloads — those with large working sets and high memory bandwidth requirements — may see higher overhead due to the encryption and decryption operations on every memory access. NVIDIA H100 GPUs in confidential computing mode show less than 5% overhead for typical LLM inference, making confidential AI workloads practically viable.
For many workloads, particularly AI inference where the security value of protecting model weights and input data is high, a 5-10% performance overhead is an acceptable trade-off. For ultra-latency-sensitive workloads like high-frequency trading, even small overhead may be problematic.
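The trade-off is straightforward to quantify. The baseline and overhead figures below are illustrative, not benchmarks; the toy model assumes overhead scales latency (and hence throughput) uniformly.

```python
def effective_throughput(base_qps: float, overhead_pct: float) -> float:
    """Queries/sec after confidential-computing overhead (toy model:
    overhead inflates per-request latency uniformly)."""
    return base_qps / (1 + overhead_pct / 100)

base = 1000.0                         # hypothetical plaintext baseline
for pct in (2, 5, 10):
    print(f"{pct:>2}% overhead -> {effective_throughput(base, pct):.0f} qps")
```

At a 1,000 qps baseline, 5% overhead costs roughly 48 qps, a capacity loss that is usually cheap to absorb relative to the security benefit; for latency budgets measured in microseconds, the same percentage can be disqualifying.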
Several practical considerations affect real-world deployment.
Memory limits. Confidential VMs may have lower maximum memory allocations than standard VMs on the same hardware, as the encryption mechanism consumes some memory for metadata and key management.
Live migration. Traditional VM live migration — moving a running VM from one physical host to another without downtime — is complicated by confidential computing, as the VM’s memory encryption keys are tied to the specific physical CPU. Solutions exist but add complexity.
Debugging. Standard debugging tools that inspect application memory cannot access encrypted memory in confidential VMs. Developers need to use specialized debugging approaches that operate within the confidential boundary.
Attestation infrastructure. Setting up and managing the attestation workflow — verifying that a confidential VM is running the expected code on genuine hardware with active protections — requires additional infrastructure and operational procedures. Services like Intel Trust Authority are simplifying this, but most organizations are still building out their attestation capabilities.
The Adoption Trajectory
Confidential computing adoption is following a predictable pattern: early adoption in the most security-sensitive sectors, followed by gradual expansion as the technology matures and costs decrease. The market’s growth from $9.3 billion to over $15 billion in a single year signals that the technology has crossed from pilot projects to production deployments.
Financial services and healthcare are the leading adopters, driven by regulatory requirements and the sensitivity of their data. Banks and insurers are using confidential computing to detect fraud and money-laundering patterns across jurisdictions without exposing personally identifiable information. Healthcare researchers are running federated analytics across institutions without violating HIPAA or GDPR.
Government agencies, particularly defense and intelligence organizations, are also early adopters, motivated by the ability to process classified data on commercial cloud infrastructure with hardware-enforced isolation that doesn’t depend on trusting the cloud provider.
AI companies are an emerging adopter category. As AI models become valuable intellectual property and training data becomes subject to increasingly strict privacy regulations, confidential computing — now extending to GPU-accelerated workloads through NVIDIA’s H100 — provides a mechanism to protect both model weights and training data during the compute-intensive training and inference processes. The combination of CPU TEEs and GPU TEEs creates a complete confidential computing stack for AI.
The mass market adoption inflection point will likely come when confidential computing becomes the default rather than an option — when cloud providers enable memory encryption by default on all instances, with no performance penalty and no additional cost. AMD and Intel are both working toward this goal, with each processor generation reducing the overhead of hardware encryption.
Within three to five years, running cloud workloads without memory encryption will be as unusual as running web traffic without TLS encryption. The “last encryption gap” will be closed not just for the most sensitive workloads, but for all of them.
🧭 Decision Radar (Algeria Lens)
| Dimension | Assessment |
|---|---|
| Relevance for Algeria | Medium-High — Algeria’s Law 18-07 mandates data protection but lacks enforcement mechanisms for data-in-use encryption. As Algerian banks, telecoms, and government agencies move workloads to cloud (including the new AI data center in Oran), confidential computing becomes critical for compliance and trust. |
| Infrastructure Ready? | No — No major cloud provider operates a region in Algeria, so local confidential computing services are unavailable. Algerian organizations using international cloud providers can request confidential VM instances, but this creates a tension with Law 18-07’s local hosting requirements. |
| Skills Available? | No — Confidential computing requires deep expertise in hardware security, attestation workflows, and TEE-aware application architecture. This skillset is extremely scarce globally and essentially absent in Algeria’s current workforce. |
| Action Timeline | 12-24 months — Begin with educational awareness for security teams and cloud architects. Pilot confidential VM deployments for the most sensitive workloads (banking, healthcare) on international cloud providers while advocating for local sovereign cloud options. |
| Key Stakeholders | ANPDP (data protection authority), Bank of Algeria, Ministry of Digitalization, Algerian financial institutions, healthcare IT departments, university cybersecurity programs |
| Decision Type | Educational / Strategic — Organizations should understand the technology now and plan for adoption as Algeria’s cloud infrastructure matures. |
Quick Take: Confidential computing solves a real problem for Algerian organizations handling sensitive data — particularly banks subject to international compliance standards and healthcare institutions managing patient records. The immediate barrier is not the technology itself but the absence of local cloud infrastructure. Algerian IT leaders should be evaluating confidential computing capabilities when selecting international cloud providers, and policymakers should consider confidential computing requirements as part of any future sovereign cloud strategy.
Sources & Further Reading
- Google Cloud Confidential Computing: New Updates for More Hardware Security Options — Google Cloud Blog
- AMD SEV-SNP Technical Documentation — AMD Developer
- Intel TDX Architecture Specification — Intel Developer
- Confidential Computing on NVIDIA H100 GPUs for Secure and Trustworthy AI — NVIDIA Technical Blog
- Enclaive Secures EUR 4.1M to Scale Confidential Computing Across Multi-Cloud — Fintech Global
- New Study Finds Confidential Computing Emerging as Strategic Imperative — Linux Foundation / CCC
- Confidential Computing Consortium: Standards and Specifications — Linux Foundation