← Field Manual

SEC-003

Trusted Execution Environments

When provenance needs to be a proof, not a claim, the answer is in silicon

The Trust Problem in Multi-Party Processing

Consider a scenario that is already common and becoming more so.

A government environmental agency wants to monitor deforestation across a national park using synthetic aperture radar. The SAR data comes from a European satellite operator. The processing is performed by a commercial analytics company running proprietary change detection algorithms on cloud infrastructure operated by a third party. The output, a deforestation alert map, is delivered to the agency, which will use it to allocate enforcement resources.

The agency needs to trust that map. Not in the general sense of "this company seems reputable." In the specific sense of: the SAR data that went into this product was authentic and unmodified. The algorithm that processed it did what it was supposed to do and nothing else. No other data was exfiltrated during processing. The result was not altered between computation and delivery.

In a conventional architecture, none of these properties can be verified by the agency. The data was decrypted on the analytics company's servers. The algorithm ran in an environment controlled by the cloud provider. Both the analytics company and the cloud provider had theoretical access to the plaintext data and the intermediate results. The agency receives a final product and a metadata record asserting that everything was done correctly.

That metadata record is a claim. The agency has no mechanism to independently verify it.


What a TEE Actually Is

A TEE is a protected region of a processor's execution environment, commonly called an enclave, in which computation is isolated from all other software on the machine. The isolation is enforced by the CPU hardware itself, not by software. This distinction is fundamental.

Software-based isolation (containers, virtual machines, process sandboxing) depends on the correctness of the operating system, hypervisor, and firmware. A privileged attacker (someone with root access to the host, or a compromised hypervisor) can inspect or modify the contents of any software-isolated environment. The isolation is as strong as the software stack, which is to say: strong against most threats, but not against the operator of the infrastructure.

Hardware-based isolation removes the software stack from the trust boundary. When code runs inside a TEE, the CPU encrypts its memory with keys that are inaccessible to any other process, including the operating system. A root-privileged attacker examining physical memory sees ciphertext. A compromised hypervisor cannot read or modify the enclave's contents. Even the entity that owns and operates the hardware cannot inspect what is happening inside the enclave during execution.

This is the property that matters for geospatial processing: the data processor does not need to be trusted, because the hardware enforces confidentiality and integrity independent of the processor's cooperation.


The Major TEE Implementations

Three TEE architectures dominate the current landscape. They differ in their isolation models, attestation mechanisms, and suitability for different workloads.

Intel SGX (Software Guard Extensions) provides application-level enclaves. A developer partitions their application into trusted and untrusted components. The trusted component runs inside an enclave with a relatively small memory budget, originally limited to 128 or 256 megabytes of encrypted memory (the Enclave Page Cache), later expanded significantly but still constrained relative to total system memory. SGX enclaves are suitable for targeted operations: cryptographic key management, credential processing, small-footprint algorithms. They are less practical for processing large raster datasets that may exceed the EPC.

AMD SEV (Secure Encrypted Virtualization) and its successors SEV-ES and SEV-SNP operate at the virtual machine level. Rather than isolating a single application, SEV encrypts the entire memory of a guest VM with a key managed by a dedicated security processor (the AMD Secure Processor, formerly Platform Security Processor). SEV-SNP (Secure Nested Paging) adds integrity protection, preventing the hypervisor from replaying, remapping, or modifying the guest's memory pages. The isolation boundary is the entire VM, which makes SEV-SNP practical for running unmodified geospatial processing pipelines (GDAL, rasterio, full Python stacks) inside a protected environment without rewriting the application.

ARM TrustZone partitions the processor into two worlds: a Secure World and a Normal World. The Secure World has its own operating system (a Trusted OS), its own memory regions, and its own peripherals. TrustZone is prevalent in mobile and embedded devices, making it relevant for edge processing scenarios (drone-mounted processors, field sensor nodes, ground station equipment) where data needs to be protected at the point of collection rather than in the cloud.

AWS Nitro Enclaves deserve mention as a cloud-specific implementation. Nitro Enclaves create isolated virtual machines on AWS infrastructure with no persistent storage, no network access, and no interactive login. Communication with the enclave occurs only through a constrained vsock channel. Nitro Enclaves use the Nitro Hypervisor's isolation rather than CPU-level memory encryption, but the operational model (an environment the cloud operator cannot inspect) serves the same function for many workloads.

Each implementation makes different tradeoffs between isolation granularity, performance overhead, memory constraints, and attestation capabilities. The choice depends on the workload: SGX for targeted cryptographic operations, SEV-SNP for full VM workloads in the cloud, TrustZone for edge and embedded, Nitro for AWS-native deployments.


Remote Attestation: The Proof Mechanism

Hardware isolation is necessary but not sufficient. A TEE that keeps data confidential but cannot prove what it did is a black box, and a black box is exactly what provenance is supposed to eliminate.

Remote attestation is the mechanism that closes this gap. It allows a remote party to verify, before sending any data, that a specific enclave is running specific code on genuine TEE hardware, and that the enclave has not been tampered with.

The attestation flow works as follows.

The enclave generates a measurement of its own contents at launch: a cryptographic hash of the code loaded into the enclave, the initial data, and the enclave's configuration. This measurement is called the MRENCLAVE (in SGX terminology) or a launch digest (in SEV-SNP). It is deterministic: the same code loaded into the same enclave configuration produces the same measurement, regardless of where or when it runs.
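The determinism of the measurement can be sketched in a few lines of Python. The hash construction below is a toy stand-in, not the actual hardware procedure (SGX builds MRENCLAVE incrementally as pages are added; SEV-SNP computes a SHA-384 digest over the initial VM state):

```python
import hashlib

def launch_measurement(code: bytes, config: bytes) -> str:
    """Toy stand-in for an enclave measurement.

    Real schemes differ in detail (SGX: SHA-256 MRENCLAVE, extended
    page by page; SEV-SNP: SHA-384 launch digest), but all are a
    deterministic hash of the loaded code and configuration.
    """
    h = hashlib.sha384()
    h.update(code)
    h.update(config)
    return h.hexdigest()

# Same code + same configuration -> same measurement, anywhere, anytime.
m1 = launch_measurement(b"change_detect v2.1", b"debug=off")
m2 = launch_measurement(b"change_detect v2.1", b"debug=off")
m3 = launch_measurement(b"change_detect v2.1", b"debug=on")  # config differs

assert m1 == m2   # identical inputs reproduce the measurement
assert m1 != m3   # any change to code or config alters it
```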

The enclave requests an attestation report from the CPU. The CPU signs the enclave's measurement with a key that is burned into the silicon at manufacture and chained to the processor vendor's certificate authority. In SGX, this is the Provisioning Certification Key, rooted in Intel's Provisioning Certification Service (the older EPID scheme relied on the Intel Attestation Service). In SEV-SNP, the signing chain traces to AMD's Key Distribution Service. The signature cannot be forged without physical compromise of the CPU's fuse-level secrets.

The remote party receives the attestation report and verifies it against the vendor's certificate chain. If the signature is valid, the remote party knows three things: the enclave is running on genuine hardware from a specific vendor, the code inside the enclave matches a known measurement, and the enclave's configuration has not been modified since launch.

Only after successful attestation does the remote party release data or cryptographic keys to the enclave. The data is encrypted to the enclave's public key, decrypted inside the enclave, processed, and the result re-encrypted to the recipient. At no point does the plaintext data exist outside the hardware-protected boundary.
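The release-after-attestation gate can be sketched as follows. Everything here is a simplified stand-in: a shared HMAC key plays the role of the vendor-rooted signature chain, and the expected measurement plays the role of the verifier's allowlist of approved code:

```python
import hashlib
import hmac
import os

# Hypothetical key material. In reality the report is signed by a key fused
# into the CPU and verified against the vendor's certificate chain.
VENDOR_KEY = os.urandom(32)

def sign_report(measurement: bytes) -> bytes:
    """CPU side: produce an attestation report over the enclave measurement."""
    return hmac.new(VENDOR_KEY, measurement, hashlib.sha256).digest()

def release_key_if_attested(measurement, report, expected_measurement, data_key):
    """Data-owner side: verify the report, then (and only then) release the key."""
    genuine = hmac.compare_digest(sign_report(measurement), report)  # vendor check
    approved = measurement == expected_measurement                   # code allowlist
    return data_key if (genuine and approved) else None

DATA_KEY = os.urandom(32)
good = hashlib.sha256(b"approved pipeline v1").digest()
evil = hashlib.sha256(b"tampered pipeline").digest()

assert release_key_if_attested(good, sign_report(good), good, DATA_KEY) == DATA_KEY
assert release_key_if_attested(evil, sign_report(evil), good, DATA_KEY) is None  # wrong code
assert release_key_if_attested(good, b"\x00" * 32, good, DATA_KEY) is None       # forged report
```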

The attestation report is itself a provenance artifact. It is a signed, timestamped assertion that a specific computation was performed by specific code on verified hardware. Attached to the output, it constitutes a cryptographic proof of processing: not a metadata claim, but a hardware-rooted certificate of what happened.


Applying TEEs to Geospatial Workflows

The geospatial processing pipeline has several points where TEE-based isolation and attestation provide capabilities that software cannot.

Data ingestion and decryption. Satellite data delivered encrypted can be decrypted only inside a TEE. The cloud operator, the analytics company's system administrators, and any lateral attacker who compromises the host never see the plaintext data. This is particularly relevant for restricted-distribution imagery: defense and intelligence products, commercial high-resolution data under license restrictions, or data subject to national sovereignty controls.

Transformation and fusion. The core processing (atmospheric correction, geometric correction, resampling, fusion, classification) runs inside the TEE. The attestation report binds the specific algorithm version and parameters to the specific inputs and outputs. A downstream consumer can verify that the advertised processing was actually performed, not merely claimed.
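A minimal sketch of such a binding record, with an HMAC standing in for the enclave's hardware-backed signature. The field names are illustrative, not drawn from any standard:

```python
import hashlib
import hmac
import json
import os

ENCLAVE_SIGNING_KEY = os.urandom(32)  # hypothetical: held only inside the enclave

def processing_record(algorithm: str, params: dict,
                      inputs: list[bytes], output: bytes) -> dict:
    """Bind the algorithm identity, parameters, input hashes, and output
    hash into one signed record (a toy analogue of attestation user data)."""
    body = {
        "algorithm": algorithm,
        "params": params,
        "input_hashes": [hashlib.sha256(i).hexdigest() for i in inputs],
        "output_hash": hashlib.sha256(output).hexdigest(),
    }
    # The signature covers a canonical serialization of the whole binding.
    canonical = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(ENCLAVE_SIGNING_KEY, canonical,
                                 hashlib.sha256).hexdigest()
    return body

rec = processing_record(
    "change_detection", {"version": "2.1", "threshold": 0.8},
    inputs=[b"SAR scene A", b"SAR scene B"],
    output=b"deforestation alert map",
)
assert rec["output_hash"] == hashlib.sha256(b"deforestation alert map").hexdigest()
```

A downstream consumer recomputes the input and output hashes, checks them against the record, and verifies the signature; a mismatch anywhere means the advertised processing does not match what actually ran.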

Multi-party computation. This is where TEEs enable workflows that are otherwise impossible. Two organizations that do not trust each other (competing satellite operators, intelligence agencies from different nations, a government regulator and a regulated company) can contribute data to a joint analysis without either party seeing the other's raw input. Both parties verify the enclave's attestation. Both encrypt their data to the enclave. The enclave fuses the inputs and releases only the agreed-upon output. Neither party needs to trust the other, the cloud provider, or the enclave operator. They trust the hardware.
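The multi-party pattern reduces to a small protocol, sketched below with attestation and encryption elided to plain function calls. The fusion operation (a per-cell maximum) is an arbitrary example of an agreed-upon output:

```python
import hashlib

EXPECTED = hashlib.sha256(b"agreed fusion code v1").digest()

class FusionEnclave:
    """Toy enclave: holds each party's contribution privately and releases
    only the agreed aggregate, never the raw inputs."""

    def __init__(self):
        self._inputs = {}
        # In reality this is the hardware-signed launch measurement.
        self.measurement = hashlib.sha256(b"agreed fusion code v1").digest()

    def contribute(self, party: str, values: list) -> None:
        self._inputs[party] = values

    def fused_output(self) -> list:
        # Only the agreed product leaves the enclave: a per-cell maximum,
        # from which neither party's individual layer is recoverable.
        a, b = self._inputs["A"], self._inputs["B"]
        return [max(x, y) for x, y in zip(a, b)]

enclave = FusionEnclave()
# Each party independently checks the attestation before contributing.
for party, data in [("A", [1, 0, 3]), ("B", [0, 2, 1])]:
    assert enclave.measurement == EXPECTED
    enclave.contribute(party, data)

assert enclave.fused_output() == [1, 2, 3]
```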

Output certification. The processed result is signed by the enclave before release. The signature, combined with the attestation report, creates a provenance chain from the enclave's verified code to the specific output. Any modification of the output after release — even a single altered pixel value — invalidates the signature.
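The tamper-evidence can be demonstrated directly. An HMAC again stands in for the enclave's signature; the point is that any single-bit change to the raster invalidates verification:

```python
import hashlib
import hmac
import os

ENCLAVE_KEY = os.urandom(32)  # hypothetical enclave-held signing key

def sign_output(pixels: bytes) -> bytes:
    """Sign the released raster bytes inside the enclave."""
    return hmac.new(ENCLAVE_KEY, pixels, hashlib.sha256).digest()

def verify_output(pixels: bytes, signature: bytes) -> bool:
    """Consumer side: check the signature against the received bytes."""
    return hmac.compare_digest(sign_output(pixels), signature)

alert_map = bytes([0, 0, 255, 128, 7])  # toy five-pixel raster
sig = sign_output(alert_map)

tampered = bytearray(alert_map)
tampered[2] ^= 1  # flip one bit of one pixel value

assert verify_output(alert_map, sig)
assert not verify_output(bytes(tampered), sig)
```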


Performance and Practical Constraints

TEEs are not free. The isolation mechanisms impose overhead, and the constraints they introduce affect system architecture decisions.

Memory encryption overhead. AMD SEV encrypts all guest VM memory using AES-128, performed inline by the memory controller. The penalty is measurable but modest for most workloads, typically a single-digit percentage slowdown in memory-bound computation. The impact on geospatial processing, which tends to be I/O-bound and compute-bound rather than memory-bandwidth-bound, is generally acceptable.

Context switching and enclave transitions. SGX enclaves incur significant overhead on enclave entry and exit (EENTER/EEXIT), on the order of thousands of clock cycles per transition. Workloads that require frequent interaction between trusted and untrusted components pay a substantial performance tax. The mitigation is architectural: minimize transitions by batching operations inside the enclave rather than making fine-grained calls across the boundary.
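The arithmetic behind the batching advice is simple. The cycle count below is an assumed illustrative figure, not a measurement of any specific CPU:

```python
# Back-of-envelope model of the enclave transition tax.
CYCLES_PER_TRANSITION = 8_000   # assumed round-trip cost (EENTER + EEXIT)
OPERATIONS = 1_000_000
BATCH_SIZE = 1_000

# Fine-grained design: one enclave call per operation.
fine_grained = OPERATIONS * CYCLES_PER_TRANSITION

# Batched design: BATCH_SIZE operations per enclave call.
batched = (OPERATIONS // BATCH_SIZE) * CYCLES_PER_TRANSITION

# The transition overhead shrinks linearly with the batch size; the
# computation itself costs the same either way.
assert fine_grained // batched == BATCH_SIZE
```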

EPC pressure in SGX. When an SGX enclave's working set exceeds the Enclave Page Cache, pages are encrypted and swapped to untrusted memory. The paging overhead is severe, an order of magnitude or more in some workloads. Large raster datasets that exceed the EPC will hit this wall. SEV-SNP avoids this problem entirely because the entire VM memory is encrypted by default, with no artificial size limit.

Attestation latency. Remote attestation involves network round-trips to the vendor's attestation service (Intel IAS, AMD KDS, or a caching intermediary). This adds latency to workflow initialization. For batch processing of large datasets, the attestation cost is amortized and negligible. For real-time or near-real-time applications (disaster response ingestion pipelines, for instance), the attestation step must be architected into the warm-up phase rather than the critical path.

Side-channel exposure. TEEs protect against direct memory inspection but do not eliminate all side channels. Cache timing attacks, power analysis, and speculative execution vulnerabilities (Spectre, Meltdown, and their descendants) have demonstrated the ability to extract information from enclaves. Mitigations exist (microcode patches, compiler-level countermeasures, constant-time algorithm implementations), but the attack surface is not zero. Threat models must account for this residual exposure, particularly in adversarial multi-tenant cloud environments.


The Attestation-Provenance Convergence

The connection between TEE attestation and data provenance is not incidental. It is structural.

A provenance record answers: what data was used, what processing was applied, when, and by what system. An attestation report answers: what code ran, on what hardware, with what configuration, and produces a signed binding between computation and output. These are the same questions expressed in different vocabularies.

The convergence is this: if every transformation in a geospatial processing pipeline runs inside an attested TEE, and each TEE generates a signed report binding its inputs to its outputs, then the set of attestation reports is the provenance chain. Not a metadata description of the provenance. Not a log entry claiming what happened. A set of cryptographic proofs, each signed by hardware, each independently verifiable, collectively constituting a complete and tamper-evident record of everything that happened between the sensor and the final product.
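Verifying such a chain is mechanical: check that each report's input hash matches the previous report's output hash, end to end. The sketch below uses illustrative field names and omits signature verification, which would apply per report as shown earlier in this entry:

```python
import hashlib

def verify_chain(reports: list, source_hash: str, final_hash: str) -> bool:
    """Check that a sequence of (toy) attestation reports forms an unbroken
    provenance chain from the source data to the final product."""
    prev = source_hash
    for r in reports:
        if r["input_hash"] != prev:
            return False  # gap or substitution between steps
        prev = r["output_hash"]
    return prev == final_hash

h = lambda b: hashlib.sha256(b).hexdigest()
raw, corrected, classified = b"L0 SAR scene", b"geocoded scene", b"alert map"

chain = [
    {"step": "geometric_correction", "input_hash": h(raw),       "output_hash": h(corrected)},
    {"step": "classification",       "input_hash": h(corrected), "output_hash": h(classified)},
]

assert verify_chain(chain, h(raw), h(classified))
assert not verify_chain(chain, h(b"different scene"), h(classified))  # substituted source
```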

This is provenance that does not depend on trust. It does not require auditing the processing facility. It does not require contractual guarantees from the data provider. It does not require faith in the integrity of the cloud operator. The proof is in the silicon, and the verification is mathematical.

We are not there yet for most operational geospatial workflows. The gap is partly technological (TEE support in geospatial toolchains is nascent) and partly cultural. The industry has operated on implicit trust for decades, and the infrastructure for explicit verification is still being built. But the trajectory is clear. As geospatial data becomes more consequential, the demand for verifiable provenance will outgrow the supply of institutional trust.


Terminology for Popover Definitions

The following terms appear in this entry and should be linked to their glossary definitions on the site:

  • TEE (Trusted Execution Environment)
  • Enclave — the isolated execution region within a TEE
  • Remote Attestation — the protocol for verifying enclave identity and integrity
  • MRENCLAVE — SGX measurement hash of enclave contents
  • EPC (Enclave Page Cache) — SGX's dedicated encrypted memory region
  • SEV-SNP (Secure Encrypted Virtualization — Secure Nested Paging)
  • Side channel — information leakage through indirect observation (timing, power, cache behavior)
  • GDAL — Geospatial Data Abstraction Library
  • SAR (Synthetic Aperture Radar)
  • Provenance — the complete verifiable record of a dataset's origin and processing history
  • Attestation report — the signed hardware assertion binding computation to output
  • Root of trust — the foundational trusted component (typically hardware) from which a trust chain extends
  • vsock — virtual socket interface for communication with Nitro Enclaves
  • Launch digest — SEV-SNP equivalent of MRENCLAVE; hash of initial VM state
  • Certificate authority chain — hierarchical trust structure from silicon vendor to attestation signature

Further Reading

Intel SGX Developer Reference — The canonical technical reference for SGX enclave development, covering enclave lifecycle, sealing, attestation, and the SDK. intel.com/sgx

AMD SEV-SNP Whitepaper — AMD's technical description of SEV-SNP's threat model, isolation guarantees, and attestation flow. Essential reading for understanding VM-level TEE isolation. amd.com/sev

"SoK: Hardware-supported Trusted Execution Environments" (Schneider, Masti, Shinde, Capkun, Perez) — A systematic comparison of TEE architectures, their security guarantees, and their limitations. The most comprehensive academic survey of the field.

Confidential Computing Consortium — The Linux Foundation consortium driving standardization of confidential computing technologies across vendors. confidentialcomputing.io

"A Survey of Published Attacks on Intel SGX" (Nilsson, Bideh, Brorsson) — A catalog of demonstrated side-channel and microarchitectural attacks against SGX, essential for realistic threat modeling.

AWS Nitro Enclaves Documentation — AWS's documentation for Nitro Enclaves, including the attestation model and the KMS integration that enables key release to attested enclaves. docs.aws.amazon.com/enclaves

NIST SP 800-233 (Draft): Trusted Execution Environment for Server Platforms — NIST's emerging guidance on TEE deployment, threat models, and assurance levels for server-side confidential computing.