This panel discussion aims to explore the latest advancements in confidential computing, including use cases, regulatory landscapes, industry collaborations, initiatives, and breakthroughs. Moderated by Felix Schuster, the panel promises a comprehensive overview of the field's developments.




When the industry created Confidential Computing as a category, it gravitated towards deterministic AES-XTS memory encryption due to its low impact on Total Cost of Ownership. Recent academic research has shown that attacks on memory protected with AES-XTS are possible with a hardware interposer. The industry has labelled these attacks as ‘out-of-scope’ because they are beyond the protection that AES-XTS encryption can offer. However, ‘out-of-scope’ does not mean we don’t care. In this talk, we will explore the range of possibilities. In particular, we discuss what Intel is working on to harden Confidential Computing against these styles of attacks while keeping Total Cost of Ownership as a main goal. This includes DICE-based attestation, which strengthens the trust chain by binding platform identity to hardware roots of trust, making it harder for attackers to spoof or tamper with attestation. We also discuss Intel's Platform Ownership Endorsements (POE) architecture, which can be used to cryptographically verify that a platform belongs to the expected owner.
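As a rough illustration of the DICE layering idea referenced above (a minimal sketch with made-up image contents and key sizes, not Intel's actual DICE or POE implementation), each layer derives its identity from the previous layer's secret and a measurement of the code it hands control to, so tampering with any layer changes every identity derived after it:

    import hashlib
    import hmac

    def derive_next_cdi(current_cdi: bytes, next_layer_code: bytes) -> bytes:
        # DICE-style layering (simplified): the next Compound Device Identifier
        # is an HMAC keyed with the current layer's secret over a measurement
        # (hash) of the code that runs next.
        measurement = hashlib.sha256(next_layer_code).digest()
        return hmac.new(current_cdi, measurement, hashlib.sha256).digest()

    # Hypothetical boot chain: device secret -> firmware -> bootloader -> workload.
    uds = b"\x00" * 32  # Unique Device Secret held by the hardware root of trust
    cdi_firmware = derive_next_cdi(uds, b"firmware image bytes")
    cdi_bootloader = derive_next_cdi(cdi_firmware, b"bootloader image bytes")
    cdi_workload = derive_next_cdi(cdi_bootloader, b"workload image bytes")

    # An attestation key derived from cdi_workload implicitly vouches for every
    # layer below it; modifying any earlier image yields a different key.
    print(cdi_workload.hex())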

For years, Google Cloud has provided confidential computing capabilities to customers, but the most rigorous testing ground for these technologies is Google itself. In this keynote, Google’s CTO will give an inside perspective on how confidential computing protects Google’s workloads and powers AI innovation. From hardening critical infrastructure and shielding Google’s own IP to enabling secure data collaboration with customers and ensuring data privacy for genAI use cases - learn what confidential computing usage looks like at Google.

Every year, the compute used to train frontier models grows roughly 4x. That trend is not slowing down. As models become more capable, they become simultaneously more dangerous to leave unprotected and more deeply embedded in sensitive workflows — a compounding problem that traditional security architectures were never designed to handle. This talk presents Anthropic's research on Confidential Inference: a system design using hardware and cryptographic attestation to ensure model weights and user data are never exposed in plaintext outside a verified enclave. Jason Clinton, Anthropic's Deputy CISO, will walk through the threat model that motivates this work — including what Anthropic has observed from nation-state actors targeting frontier AI companies — the attestation chain and deployment architecture, and why confidential computing is a necessary component of Anthropic's Responsible Scaling Policy as we approach higher safety levels. He will also address the harder, less solved problem on the horizon: as autonomous AI agents proliferate and begin operating with broad access to enterprise data, the probability of exposure compounds toward near-certainty at scale, and the security guarantees we provide must be commensurate with that reality.
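To make the attestation-gated flow concrete, here is a minimal sketch of a key-release check (the report format, allowlist, and function names are illustrative assumptions, not Anthropic's actual design): the service holding the weight-decryption key hands it over only after the requesting enclave's measurement verifies against an approved build.

    import hashlib
    from dataclasses import dataclass

    @dataclass
    class AttestationReport:
        # Stand-in for a hardware attestation report (e.g. an enclave quote).
        measurement: bytes   # hash of the code/config loaded into the enclave
        signature: bytes     # signature by the hardware vendor's attestation key

    # Measurements of inference-server builds we are willing to trust (illustrative).
    TRUSTED_MEASUREMENTS = {hashlib.sha256(b"approved inference server build").digest()}

    def signature_is_valid(report: AttestationReport) -> bool:
        # A real verifier walks the vendor certificate chain; stubbed out here
        # so the sketch stays self-contained.
        return len(report.signature) > 0

    def release_weight_key(report: AttestationReport, wrapped_key: bytes) -> bytes:
        # Release the model-weight decryption key only to a verified enclave.
        if not signature_is_valid(report):
            raise PermissionError("attestation signature invalid")
        if report.measurement not in TRUSTED_MEASUREMENTS:
            raise PermissionError("enclave is running unapproved code")
        return wrapped_key  # in practice: unwrap the key for this enclave only

    report = AttestationReport(
        measurement=hashlib.sha256(b"approved inference server build").digest(),
        signature=b"sig",
    )
    key = release_weight_key(report, wrapped_key=b"\x01" * 32)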

Recent systemd versions gained various verified boot & TPM features. In this talk I'd like to give an overview of what's already available and how it fits into our bigger vision of a secure operating system. We'll focus in particular on immutable operating systems, their remote attestation, and their relevance to Confidential Computing scenarios.
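For context on the measurement side of this: a TPM does not store the list of boot-time measurements directly; each event is folded into a Platform Configuration Register, and a remote verifier replays the event log to recompute the expected value. A minimal sketch of that extend operation (simplified to SHA-256 and a made-up event log, not tied to any specific systemd tool):

    import hashlib

    def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM-style extend: new PCR value = H(old PCR || measurement).
        return hashlib.sha256(pcr + measurement).digest()

    # Replay a hypothetical event log for one PCR, starting from all zeros.
    pcr = b"\x00" * 32
    for component in (b"kernel image", b"initrd", b"kernel command line"):
        pcr = pcr_extend(pcr, hashlib.sha256(component).digest())

    # A remote verifier recomputes this value from the signed event log and
    # compares it against the quoted PCR before trusting the machine.
    print(pcr.hex())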

This talk will show how combining TDX confidentiality guarantees, dynamic measurement, and trusted software build and provisioning practices enables an end-to-end trusted and transparent LLM service pipeline that meets the evolving demands of LLM workloads.

Generative AI and Agentic AI are driving an unprecedented wave of innovation, increasingly powering complex business workflows and handling proprietary models and sensitive data—from initial submission to intermediate processing. For agility and scalability, the cloud has become the preferred platform for these applications. However, security remains a critical concern for enterprise users, especially in highly regulated sectors such as healthcare and finance. Ensuring confidentiality and integrity of sensitive assets—such as AI models, user prompts, and intermediate data—must extend beyond storage and transmission to include the compute layer itself. Confidential Computing offers a promising solution to achieve this goal. In this session, we examine a full AI stack built on LLM-d, a disaggregated LLM serving platform that forms the backbone of agentic AI systems, including request routing and KV-cache management. We demonstrate how a multi-GPU enclave can operate in confidential mode to prevent data leakage or unauthorized access.
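As a toy illustration of the KV-cache-aware routing mentioned above (our simplification with invented worker names, not llm-d's actual scheduler), a router can prefer the worker that already holds the longest cached prefix of the incoming prompt and fall back to current load when no cache matches:

    from dataclasses import dataclass, field

    @dataclass
    class Worker:
        name: str
        load: int = 0
        cached_prefixes: set[str] = field(default_factory=set)  # prompt prefixes with resident KV cache

    def longest_cached_prefix(worker: Worker, prompt: str) -> int:
        # Length of the longest prompt prefix this worker already has KV cache for.
        return max((len(p) for p in worker.cached_prefixes if prompt.startswith(p)), default=0)

    def route(workers: list[Worker], prompt: str) -> Worker:
        # Prefer KV-cache reuse; break ties by picking the least-loaded worker.
        return max(workers, key=lambda w: (longest_cached_prefix(w, prompt), -w.load))

    workers = [
        Worker("gpu-0", load=3, cached_prefixes={"You are a helpful assistant."}),
        Worker("gpu-1", load=1),
    ]
    prompt = "You are a helpful assistant. Summarize the attached contract."
    print(route(workers, prompt).name)  # gpu-0: reuses the cached system-prompt prefix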

Industry recognition of Confidential Computing has risen significantly, and offerings are now available in public and private clouds: what is next? One of the ways to drive demand for CC solutions is via regulations and standards. This session introduces some of the ways this is happening and discusses some of the options for the future - and ways for individuals and organisations to get involved.

Modern confidential computing technologies like AMD SEV-SNP and Intel TDX provide a reliable way to isolate guest workloads and data in use from the virtualization or cloud infrastructure. Protecting data at rest is, however, not something you get ‘by default’. The task is particularly challenging for traditional operating systems where users expect a full read/write experience. The good news is that Linux already offers a number of great technologies which can be combined to achieve the goal: dm-verity and dm-integrity, LUKS, discoverable disk images, and others. Doing it all right, however, is left as an “exercise for the reader”. In particular, the proposed solution must allow for meaningful remote attestation at any time in the lifetime of the guest. The talk will focus on recent developments in upstream projects like systemd and dracut that aim to make full disk encryption consumable by confidential computing guests running in a cloud.
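As a rough sketch of the dm-verity idea (heavily simplified: the real on-disk format uses a salted, multi-level hash tree and verifies blocks lazily on read), the read-only image is split into fixed-size blocks whose hashes roll up into a single root hash; pinning that root hash via the measured kernel command line or the attestation evidence means any offline tampering with the image is detected:

    import hashlib

    BLOCK_SIZE = 4096

    def hash_blocks(data: bytes) -> list[bytes]:
        # Hash every fixed-size data block (the leaves of the tree).
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        return [hashlib.sha256(block).digest() for block in blocks]

    def merkle_root(hashes: list[bytes]) -> bytes:
        # Roll the block hashes up into a single root hash.
        while len(hashes) > 1:
            if len(hashes) % 2:
                hashes.append(hashes[-1])  # duplicate the last hash on odd levels
            hashes = [hashlib.sha256(hashes[i] + hashes[i + 1]).digest()
                      for i in range(0, len(hashes), 2)]
        return hashes[0]

    image = b"\x42" * (BLOCK_SIZE * 8)  # stand-in for a root filesystem image
    root_hash = merkle_root(hash_blocks(image))

    # The verifier pins this value; a single flipped bit anywhere in the image
    # produces a different root hash.
    print(root_hash.hex())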


Confidential computing is rapidly evolving with Intel TDX, AMD SEV-SNP, and Arm CCA. However, unlike TDX and SEV-SNP, Arm CCA lacks publicly available hardware, making performance evaluation difficult. While Arm's hardware simulation provides functional correctness, it lacks cycle accuracy, forcing researchers to build best-effort performance prototypes by transplanting their CCA-bound implementations onto non-CCA Arm boards and estimating CCA overheads in software. This leads to duplicated efforts, inconsistent comparisons, and high barriers to entry. In this talk, I will present OpenCCA, our open research framework that enables CCA-bound code execution on commodity Arm hardware. OpenCCA systematically adapts the software stack—from bootloader to hypervisor—to emulate CCA operations for performance evaluation while preserving functional correctness. Our approach allows researchers to lift-and-shift implementations from Arm’s simulation to real hardware, providing a framework for performance analysis, even without publicly available Arm CPUs with CCA. I will discuss the key challenges in OpenCCA's design, implementation, and evaluation, demonstrating its effectiveness through life-cycle measurements and case studies inspired by prior CCA research. OpenCCA runs on an affordable Armv8.2 Rockchip RK3588 board ($250), making it a practical and accessible platform for Arm CCA research.

In today’s hyper-connected world, cross-company innovation is essential - but often blocked by regulatory friction and the fear of data/IP leakage. Our Hermetik multiparty collaboration platform uses Confidential Computing to enable verifiable, neutral Trusted Collaboration Spaces in which sensitive enterprise-grade workloads can be run securely across organizational boundaries. By integrating the four pillars of trust – shared governance, hardware-enabled security, trust-inducing transparency, and precision control – Hermetik serves as an “operating system” for effective cross-company collaboration. Our technical approach is unique in that it allows for the rapid composition of complex collaborative solutions from unmodified building blocks from the vast universe of cloud-native and Kubernetes-native software, including those that rely on the trustworthiness of the Kubernetes control plane. With Hermetik, we enable our customers from the automotive industry to accelerate integration cycles by providing a secure, centralized, cross-company continuous integration system with built-in IP protection.

To enable large-scale deployment of Trust Domain Extensions (TDX) in cloud environments, Cloud Service Providers (CSPs) face increasing pressure to optimize system resource usage, especially the memory overhead introduced by security metadata. One major contributor to this overhead is the Physical Address Metadata Table (PAMT), which is used to track encryption and ownership information for each physical page. In current TDX implementations, a 16-byte entry is required for every 4KB page, leading to 4GiB per TiB being reserved for PAMT — an unsustainable cost in hyperscale deployments. We (Alibaba & Intel) propose a new dynamic PAMT mode with no compromise on TDX compatibility. In our observation, tenants generally do not care about the mapping granularity of the backing memory, such as whether it is mapped at 4KB or 2MB granularity. In practice, 4KB mappings are very sparse across the encrypted memory region.
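A back-of-the-envelope calculation (ours, simply illustrating the numbers cited above) shows why tracking at 2MB granularity where possible pays off: the same 16-byte entry per tracked page shrinks the per-TiB metadata from gigabytes to megabytes.

    TIB = 1 << 40
    ENTRY_BYTES = 16  # PAMT entry size per tracked page

    def pamt_overhead(page_size: int, memory: int = TIB) -> int:
        # Metadata bytes needed to track `memory` at the given page granularity.
        return (memory // page_size) * ENTRY_BYTES

    overhead_4k = pamt_overhead(4 << 10)
    overhead_2m = pamt_overhead(2 << 20)
    print(overhead_4k / (1 << 30), "GiB per TiB at 4KB pages")   # 4.0 GiB
    print(overhead_2m / (1 << 20), "MiB per TiB at 2MB pages")   # 8.0 MiB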

Model Context Protocol (MCP) servers are rapidly becoming the standard architecture for connecting AI models to enterprise data sources. Because these servers hold database credentials, API keys, and customer PII, they represent a critical control point in the AI security lifecycle. While much of the industry focuses on prompt injection, a fundamental infrastructure challenge remains: How do you secure the MCP server itself when privileged cloud administrators typically have access to the underlying host? This talk presents a production-ready security architecture for hardening MCP servers. We focus on patterns that are fundamentally transferable across cloud providers and on-premise environments, using a live deployment on Azure Confidential Containers as the reference implementation.

Besides the obvious advantages, LLM applications carry risks when handling sensitive prompts, which in certain use cases may contain confidential data and can be unintentionally disclosed due to errors, attacks, or public interfaces. Providers usually have technical access to prompts and models, which reduces acceptance in certain domains (e.g., the public sector and the e-health sector). Confidential computing methods offer a solution by running the sensitive system components in encrypted environments, thereby denying operators access to the prompt contents. This in turn enables further use cases in the LLM domain that would previously not have been possible for data protection and compliance reasons. Cloud sovereignty aspects also play an important role in the security posture of the resulting LLM-based application. In our talk, we are going to set the scene for privacy-preserving, LLM-based systems, argue for the necessity of provider exclusion in a public cloud setting, and outline possible solution architecture patterns.


Trusted Execution Environments (TEEs) are emerging as a powerful complement to cryptographic verification in Web3, enabling scalable, confidential, and verifiable computation across multiple layers of the stack. This talk examines the real adoption paths for TEEs in decentralized systems today—covering their roles in rollup execution, validator and prover infrastructure, consensus protocols, and hybrid TEE–zk designs. We will separate practical engineering realities from hype, outline the current limitations of trusted hardware, and highlight where TEEs provide uniquely strong leverage compared with purely cryptographic approaches. The talk will conclude with a detailed look at how Trillion.xyz is architecting its decentralized exchange (DEX) platform using TEEs to enable fast and trustworthy trading. Attendees will gain a clear understanding of where TEEs fit in the modern Web3 stack and how to use them to build trustworthy, high-performance decentralized applications.

Confidential Computing is rapidly transforming how sensitive data is processed, offering new ways to protect information even while it is in use. At the heart of this shift are Trusted Execution Environments (TEEs) and their extensions into Confidential Virtual Machines (CVMs). These technologies already power confidential cloud services used in practice, yet their threat models and real-world deployment strategies often diverge—creating critical blind spots for defenders and opportunities for adversaries. This talk will dissect the architectural trade-offs between process-based and VM-based TEEs, highlighting both their strengths and limitations when deployed in hostile or minimally trusted environments. We will explore how providers and enterprises can bridge today’s confidence gap by generating verifiable “Proofs of Cloud” that tie workloads to their physical platforms. By doing so, we address long-standing challenges such as replay, attestation proxying, and the implicit trust assumptions baked into cloud-scale TEEs.
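To make the replay concern concrete, the standard countermeasure is to bind every attestation to verifier-chosen freshness data, so an old report cannot be presented twice. A minimal sketch (illustrative names and a fabricated report, not any specific vendor's quote format):

    import hashlib
    import os
    from dataclasses import dataclass

    @dataclass
    class Quote:
        # Stand-in for a hardware attestation report.
        measurement: bytes   # launch measurement of the workload
        report_data: bytes   # caller-supplied data bound into the signed report
        signature: bytes     # vendor signature (verification stubbed out here)

    def request_attestation(nonce: bytes) -> Quote:
        # In reality the TEE hardware signs a report over H(nonce);
        # here we fabricate one so the sketch runs end to end.
        return Quote(measurement=b"\xaa" * 48,
                     report_data=hashlib.sha256(nonce).digest(),
                     signature=b"sig")

    def verify_fresh(quote: Quote, nonce: bytes, expected_measurement: bytes) -> bool:
        # Reject stale (replayed) reports and unexpected workloads.
        if quote.report_data != hashlib.sha256(nonce).digest():
            return False  # report was not produced for our challenge
        return quote.measurement == expected_measurement

    nonce = os.urandom(32)  # fresh per verification attempt
    quote = request_attestation(nonce)
    print(verify_fresh(quote, nonce, b"\xaa" * 48))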

As web services become more personalized, establishing user trust in privacy promises is critical. Transparency is a powerful way for cloud services to demonstrate their privacy behavior. This talk proposes a mechanism enabling ordinary web clients to verify server transparency without client code changes. We combine server-side Confidential Computing technology with existing Certificate Transparency features from the modern Web PKI. We will describe a practical path to standardizing these techniques while maintaining backwards compatibility for existing web clients. Join us to discuss standardizing public software endorsements and trusted server certification to build transparent, privacy-preserving web experiences.
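For readers who have not looked inside Certificate Transparency: a client that holds a log's signed tree head can check that a particular entry really is in the log by verifying a short Merkle inclusion proof. A minimal sketch following the RFC 6962 hashing conventions (toy three-entry log built by hand; a real client would also verify the log's signatures and consistency proofs):

    import hashlib

    def leaf_hash(entry: bytes) -> bytes:
        # RFC 6962 leaf hash: H(0x00 || entry).
        return hashlib.sha256(b"\x00" + entry).digest()

    def node_hash(left: bytes, right: bytes) -> bytes:
        # RFC 6962 interior node hash: H(0x01 || left || right).
        return hashlib.sha256(b"\x01" + left + right).digest()

    def verify_inclusion(entry: bytes, index: int, tree_size: int,
                         proof: list[bytes], root: bytes) -> bool:
        # Recompute the root from the leaf and the audit path (RFC 6962/9162).
        if index >= tree_size:
            return False
        fn, sn = index, tree_size - 1
        r = leaf_hash(entry)
        for p in proof:
            if sn == 0:
                return False
            if fn & 1 or fn == sn:
                r = node_hash(p, r)
                if not fn & 1:
                    while fn and not fn & 1:
                        fn >>= 1
                        sn >>= 1
            else:
                r = node_hash(r, p)
            fn >>= 1
            sn >>= 1
        return sn == 0 and r == root

    # Tiny three-entry log built by hand to exercise the verifier.
    entries = [b"entry-0", b"entry-1", b"entry-2"]
    h0, h1, h2 = (leaf_hash(e) for e in entries)
    root = node_hash(node_hash(h0, h1), h2)
    print(verify_inclusion(entries[0], 0, 3, [h1, h2], root))  # True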

Trustee is an open source attestation and resource management service for confidential workloads. Trustee supports a large array of platforms and deployment models. This talk will cover some advanced features which have been recently added or are in progress. Specifically, it will describe how Trustee makes device attestation seamless, how we are introducing a confidential identity framework, and our plan for a unified vTPM abstraction that will work across platforms. Ultimately, our goal is to deliver advanced attestation support in a simple and intuitive way, and to help unify confidential computing around common concepts and concrete tools and services. This talk will show how.

Since its inception in 2022, the COCONUT-SVSM project has evolved far beyond its original goal of providing a Secure VM Service Module for AMD SEV-SNP confidential VMs. Today, its scope spans multiple emerging use cases, including lightweight paravisors for running unenlightened guest operating systems and Service VMs that deliver trusted functionality outside their own TCB boundaries. This talk will dive into how COCONUT enables these scenarios, highlighting newly added capabilities as well as upcoming features. We will explore how the project is transforming into a versatile confidential-workload platform while preserving the strong CVM-protection guarantees that form its foundation.

EC2 offers multiple approaches to running confidential workloads, each protecting different aspects of your compute environment. This session explores AWS's comprehensive confidential computing portfolio, from hardware-based isolation to cryptographic attestation. We'll examine how technologies like Nitro Enclaves, NitroTPM, Memory Encryption, and Attestation protect different layers of your compute stack against varying threat models. Learn the technical differences between these options, their real-world applications, and implementation guidance. Whether you're handling sensitive financial data, protecting intellectual property, or meeting compliance requirements, discover how to choose and deploy the right confidential computing solution for your use case.


The "Agentic Web" is rapidly emerging, with the Model Context Protocol (MCP) becoming the de facto standard for connecting LLMs to external tools and data. However, the current ecosystem relies on implicit trust, leaving agents vulnerable to a new class of threats. In this session, we introduce Attested MCP, a framework that transitions the agent ecosystem from implicit trust to computational trust. We will detail a reference architecture that leverages Confidential Computing to secure the MCP supply chain. Specifically, we will demonstrate how to deploy MCP servers within Trusted Execution Environments and run third-party tools in verifiable WebAssembly sandboxes.

