Confidential Computing: Securing Data During Processing
By Sashwat Karuvayil — Senior Engineer, IBM India Systems Development Lab, Bengaluru. Leads the open-source contract tooling for IBM Confidential Computing.
When I presented on confidential computing, the framing I kept coming back to was simple: we have spent years getting better at protecting data when it sits on disk and when it moves across networks, but the moment an application needs to compute on that data, the protection story gets more complicated.
That gap matters. Modern systems process personal data, financial data, AI prompts, private models, keys, tokens, transaction records, and business logic in environments that are rarely owned end to end by a single team. Workloads run on cloud platforms, shared Kubernetes clusters, managed services, partner infrastructure, and increasingly across organizational boundaries. At that point, "the database is encrypted" is not the same thing as "the data is protected."
Confidential computing is about closing that gap.
The missing state: data in use
Most security conversations start with two familiar states of data.
Data at rest is data sitting in databases, object storage, disks, backups, snapshots, and related storage systems. The usual controls are encryption, access control, backup encryption, key management, and audit trails.
Data in transit is data moving across networks: client to server, service to service, region to region, or organization to organization. The usual controls are TLS, VPNs, secure APIs, firewalls, service meshes, certificates, and network policy.
Those two states are important, but neither describes what happens when data is actively being used by an application. At some point, the workload has to decrypt the data, load it into memory, process it, and produce a result. That is data in use.
This is the moment confidential computing cares about. The goal is not only to encrypt the storage layer or protect the network path. The goal is to reduce exposure while the computation itself is happening.

What confidential computing means
The Confidential Computing Consortium defines confidential computing as the protection of data in use by performing computation in a hardware-based, attested trusted execution environment.
That definition has three important pieces.
First, the protection is about data in use. Confidential computing is not a replacement for disk encryption or TLS. It extends the protection model to the phase where applications actually do work.
Second, the execution environment is hardware-based. The trust boundary is not just a container namespace, a process boundary, or a virtual machine policy. It is backed by processor or platform features designed to isolate the workload from the rest of the system.
Third, the environment is attested. Attestation lets a relying party verify what kind of environment the workload is running in before trusting it with sensitive data. Without attestation, you are mostly taking the platform's word for it. With attestation, you can build policy around evidence.
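To make "building policy around evidence" concrete, here is a minimal sketch of what a relying party does during attestation: check that the evidence is authentic, then check that its claims match an expected policy. All names and values are hypothetical, and the hardware-rooted signature is simulated with an HMAC purely for illustration; real TEEs return evidence signed by platform keys with certificate chains.

```python
import hashlib
import hmac
import json

# Measurements the relying party expects to see (hypothetical values).
EXPECTED_MEASUREMENTS = {
    "platform": "example-tee-v2",
    "workload_digest": hashlib.sha256(b"trusted-workload-image").hexdigest(),
}

def verify_evidence(evidence: dict, signing_key: bytes) -> bool:
    """Accept evidence only if it is authentic AND matches expected claims."""
    payload = json.dumps(evidence["claims"], sort_keys=True).encode()
    expected_sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    # 1. Authenticity: the signature must verify against the platform's key.
    if not hmac.compare_digest(expected_sig, evidence["signature"]):
        return False
    # 2. Policy: every expected measurement must match exactly.
    return all(
        evidence["claims"].get(k) == v for k, v in EXPECTED_MEASUREMENTS.items()
    )

# The platform produces evidence; the relying party verifies before trusting.
key = b"platform-signing-key"  # stands in for hardware-rooted key material
claims = dict(EXPECTED_MEASUREMENTS, nonce="freshness-token")
sig = hmac.new(key, json.dumps(claims, sort_keys=True).encode(),
               hashlib.sha256).hexdigest()
evidence = {"claims": claims, "signature": sig}

print(verify_evidence(evidence, key))  # True: authentic and matches policy
```

The point of the sketch is the ordering: verification happens before any sensitive input is sent, and "taking the platform's word for it" is replaced by checking signed claims against a policy you control.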
Trusted execution environments
A trusted execution environment, or TEE, is an isolated place to run code and process data. The exact implementation differs across vendors and platforms, but the security promise is similar: keep a workload's sensitive memory and execution context separated from other software on the machine.
That matters because the traditional cloud stack has many powerful layers. The host operating system, hypervisor, platform administrators, orchestration layer, and adjacent workloads may all have some kind of operational control over the infrastructure. Confidential computing tries to narrow what those layers can see or tamper with.
In practice, a TEE gives you a stronger answer to questions like:
- Can the workload run without exposing secrets to the host?
- Can a customer verify the environment before sending sensitive input?
- Can a platform operator manage infrastructure without automatically gaining access to workload data?
- Can two parties collaborate on computation without one party handing raw data to the other?
The technology does not make a bad application safe by itself. It does not remove the need for secure coding, dependency hygiene, key management, monitoring, or incident response. But it gives architects a new trust boundary to design around.
Why regulation keeps showing up in this conversation
The business pressure is not abstract. Financial services, telecom, healthcare, public sector, and data-heavy digital businesses are all being pushed toward stronger evidence of protection, resilience, and privacy.
In the presentation, I called out examples such as the EU Digital Operational Resilience Act, India's Digital Personal Data Protection Act, sector expectations from SEBI and TRAI, and ISO 27001-style information security controls. These frameworks are not all saying "use confidential computing." What they are doing is raising the bar around encryption, cryptographic controls, operational resilience, privacy, and accountability.
Confidential computing becomes interesting because it gives teams another control to apply when the risk is not just storage theft or network interception, but exposure during processing.
That distinction is especially relevant when workloads run outside a fully trusted environment. A bank might want to process sensitive analytics in the cloud. A healthcare platform might want to collaborate with a research partner. An AI product might want to protect prompts, embeddings, model weights, or inference data. In all of those cases, the hard question is: who can see the data while the system is doing the work?
Operational assurance versus technical assurance
One useful way to think about confidential computing is as a shift from relying only on operational assurance toward stronger technical assurance.
Operational assurance is based on process: access reviews, background checks, policies, audits, change management, separation of duties, and vendor controls. These are necessary, but they depend on people and organizations behaving correctly.
Technical assurance is based on mechanisms: encryption, isolation, attestation, signing, key release policy, and verifiable runtime measurements. These controls do not remove organizational trust, but they reduce how much trust has to be placed in broad operational promises.
Good systems need both. A confidential workload still needs governance. But if sensitive data can be released only after attestation proves the expected workload is running in the expected environment, the trust model becomes more concrete.
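That release-after-attestation pattern can be sketched as a key broker that holds a data key and hands it out only when the requester's attested claims satisfy a release policy. The class, field names, and policy shape below are hypothetical illustrations, not any vendor's API.

```python
# Sketch of attestation-gated secret release (hypothetical names).
# A key broker holds the data key and releases it only when the requesting
# environment presents attested claims that satisfy the release policy.
class KeyBroker:
    def __init__(self, data_key: bytes, policy: dict):
        self._data_key = data_key
        self._policy = policy  # e.g. required workload digest, debug disabled

    def release_key(self, attested_claims: dict) -> bytes:
        """Release the key only if every policy entry matches the claims."""
        for field, required in self._policy.items():
            if attested_claims.get(field) != required:
                raise PermissionError(f"policy failed on {field!r}")
        return self._data_key

broker = KeyBroker(
    data_key=b"\x00" * 32,
    policy={"workload_digest": "abc123", "debug_enabled": False},
)

# An attested production workload gets the key...
key = broker.release_key(
    {"workload_digest": "abc123", "debug_enabled": False}
)

# ...while a debug-enabled environment is refused.
try:
    broker.release_key({"workload_digest": "abc123", "debug_enabled": True})
except PermissionError as exc:
    print("denied:", exc)
```

In a real deployment the claims would come from verified attestation evidence rather than a plain dict, but the design point is the same: the secret never reaches an environment that cannot prove it is the expected one.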
That is the real architectural value: confidential computing gives security teams something measurable to build policy around.
Where it fits
The clearest use cases are the ones where data sensitivity and trust boundaries collide.
Secure containerized workloads are a natural fit. Containers made application packaging and deployment portable, but the host and orchestration layers still matter. Confidential computing can help platform teams run sensitive workloads on shared or cloud infrastructure with a stronger isolation story.
Digital asset and Web3 platforms are another fit because private keys, signing flows, transaction policies, and custody models depend heavily on protecting high-value secrets during execution.
AI and machine learning pipelines are becoming one of the most important areas. Private prompts, sensitive training data, embeddings, and proprietary model weights all create new questions about who can observe what during training, fine-tuning, retrieval, and inference. Confidential computing does not solve every AI security problem, but it is very relevant when sensitive data or models cross platform boundaries.

The pattern is the same in each case: the workload needs to compute on something valuable, and the owner of that value wants stronger guarantees about the environment doing the computation.
The product landscape
The ecosystem is still broad and uneven, but there are several recognizable layers.

At the hardware and platform level, technologies include IBM Secure Execution, AMD SEV-ES and SEV-SNP, Intel SGX and TDX, and the AWS Nitro System. Each makes different tradeoffs around isolation granularity, VM or enclave model, attestation, memory protection, and operational ergonomics.
At the software and cloud-service level, examples include IBM Hyper Protect offerings, AWS Nitro Enclaves, Azure Confidential VMs, Azure confidential containers on AKS, GCP Confidential VMs, and GCP Confidential Space. There are also higher-level platforms and tooling from vendors focused specifically on confidential computing workflows.
The important thing is not to treat these as interchangeable checkboxes. A confidential VM, an enclave, a secure execution environment, and a confidential container platform may all serve different operational models. The right question is not "which one sounds most secure?" It is "what is the trust boundary, what evidence can I verify, and how does this fit the workload?"
How I explain the value
The shortest version is this: confidential computing lets you protect sensitive data closer to the moment of computation.
It gives architects a way to say:
- The data is encrypted when stored.
- The data is protected when transmitted.
- The workload runs inside an isolated, hardware-backed environment.
- The environment can be attested before secrets or sensitive inputs are released.
- Platform operators can manage infrastructure without automatically becoming data insiders.
That last point is the one that changes the design conversation. It makes confidential computing less about a single security feature and more about rethinking who has to be trusted by default.
What to keep in mind
Confidential computing is not magic. It does not eliminate application bugs, supply-chain risk, exposed APIs, weak identity controls, or bad key management. It also introduces new design work: attestation flows, image measurement, policy decisions, secret release, observability constraints, and debugging tradeoffs.
But for the right workloads, it is a meaningful addition to the security architecture. It gives teams a way to reason about data in use with the same seriousness they already bring to data at rest and data in transit.
That is why I think the phrase "data in use" is the best entry point. Once that gap is visible, confidential computing stops sounding like niche hardware terminology and starts looking like the next logical layer in protecting sensitive systems.
Frequently asked questions
What problem does confidential computing solve?
Confidential computing protects data while it is being processed. Traditional controls such as disk encryption and TLS protect data at rest and in transit, but data usually has to be decrypted before an application can use it. Confidential computing reduces that exposure by executing workloads inside hardware-backed trusted execution environments.
What is a trusted execution environment?
A trusted execution environment, or TEE, is an isolated execution boundary backed by hardware. It is designed to keep code and data protected from other software on the same machine, including privileged layers such as the host operating system or hypervisor.
Where is confidential computing useful?
It is useful for workloads that process sensitive data across organizational or platform trust boundaries: regulated financial systems, health data processing, secure container workloads, digital asset platforms, and AI or machine learning pipelines that use private data or sensitive models.
Does confidential computing replace encryption?
No. It complements encryption. You still need encryption at rest, encryption in transit, access controls, key management, and operational controls. Confidential computing adds protection for data in use.