As organizations develop autonomous AI capabilities and new collaboration models, managing sensitive data becomes essential, because these decision-making models operate independently and in real time.
NVIDIA Confidential Computing ensures that AI models, whether general-purpose or agent-specific, are deployed in a secure, isolated environment, in the cloud or on-premises, safeguarding sensitive data at every stage of its journey.
Learn how you can achieve secure AI-driven outcomes from your data assets by leveraging NVIDIA Confidential Computing and the powerful primitives it provides for building the trusted environments and interactions that are at the heart of modern agentic flows.

As Generative AI becomes a cornerstone of innovation, ensuring data privacy and intellectual property protection in cloud-based deployments remains a critical challenge. Privatemode addresses this by providing a SaaS platform for Confidential GenAI, leveraging Confidential Computing to deliver end-to-end encryption for inference workloads.
This talk will cover how Privatemode is built on Contrast and Confidential Containers to deliver a secure and scalable platform for GenAI as a service. We will explain how these technologies enable confidential orchestration, ensure workload attestation, and solve challenges like integrating GPUs securely. Key features of Privatemode, such as provider exclusion, support for cutting-edge open-source LLMs, and compatibility with OpenAI’s API, will also be discussed.
This session provides insights into building practical, confidential GenAI solutions and demonstrates how Confidential Computing makes it possible to leverage GenAI’s potential without compromising confidentiality. We will also discuss real-world use cases.

The volume of fraudulent activity in payments continues to rise year after year, prompting banks to ramp up their investments in prevention and compliance measures to safeguard the integrity of the financial system. Recognizing the imperative for industry-wide collaboration, Swift, as a member-driven cooperative, is spearheading efforts to mitigate the impact of fraud through innovative approaches.
In this presentation, we will showcase Swift's groundbreaking initiative to drive industry collaboration in fraud reduction. Leveraging its unparalleled network and community data, Swift is pioneering a foundation model for anomaly detection with unprecedented accuracy and speed. Central to this endeavor is Swift's strategic integration of confidential computing and verifiable attestation, ensuring the highest standards of security and privacy in data and AI collaboration.
By partnering with key technology vendors and leading an Industry Pilot Group comprising the world's largest banks, Swift is tackling some of the toughest challenges that have plagued the industry for decades. This collaborative effort underscores the recognition that no single entity possesses all the answers, but together, industry stakeholders can forge solutions that benefit all.
Attendees will gain invaluable insights into Swift's holistic approach to combating fraud, and how confidential computing serves as a linchpin in enabling secure collaboration among industry players. Join us to discover how this work is championing a global, inclusive economy that prioritizes the interests of end-customers, while maintaining the highest standards of security and privacy.

This panel discussion aims to explore the latest advancements in confidential computing, including use cases, regulatory landscapes, industry collaborations, initiatives, and breakthroughs. Moderated by Felix Schuster, the panel promises a comprehensive overview of the field's developments.




AI is influencing multiple workstreams, all of which have different requirements and outcomes. The diversity and sensitivity of these workloads are accelerating the move to custom silicon with hardware-based security features. This presentation will review methods of protecting models and data on Arm, including the use of trusted execution environments and open-source software, illustrated with use cases.

In this talk, Airbus will demonstrate how mobile data centers in hostile environments can be secured with confidential computing.

We're all here because we believe in the technologies behind Confidential Computing and in the ability of TEEs to change the way businesses manage and interact with data, but what's next? The answer is Remote Attestation: a must-have part of Confidential Computing that is sometimes treated as an add-on, but which is actually the secret sauce that can change the value proposition for CC. This session discusses why remote attestation matters and gives concrete examples of how it can provide significant business value.

The possibilities for confidential computing are endless, but innovators must be equipped with the right tools, integrated with the systems enterprises actually use. In this session, Intel will discuss technology roadmaps, software, and services you can use to build the next generation of security and privacy solutions.

This session will provide an in-depth look at the latest advancements in Azure Confidential Computing, focusing on technical innovations around AMD SEV-SNP, Intel TDX, and confidential NVIDIA GPUs and their real-world applications. Attendees will gain insights into the practical benefits and challenges faced during deployment, with examples from both within and outside Azure. Additionally, we will present technical demos spanning confidential VMs, confidential containers, and confidential AI, including Retrieval-Augmented Generation (RAG) and Azure ML-based confidential AI inferencing. We will also explore how Azure Confidential Computing integrates with Microsoft Cloud for Sovereignty (MCFS) to address regulatory and compliance needs. This session will highlight how Azure's confidential computing capabilities ensure data privacy and integrity throughout the application lifecycle, making it possible to develop secure and private applications.

Confidential Virtual Machines (CVMs) introduce new security paradigms, necessitating operating systems that are specifically architected to support their unique threat models. Ubuntu Core, Canonical’s immutable, containerized Linux OS, offers an ideal platform for CVMs, addressing both security and operational resilience at scale.
This presentation will delve into the key technical attributes that make Ubuntu Core well-suited for CVMs. We will highlight its architecture, which is built around an immutable, read-only base system, ensuring that the operating environment remains secure from tampering. Through atomic updates, Ubuntu Core ensures that system changes are applied consistently, preventing issues that could arise from partial updates or system corruption. The use of strict application isolation via containerization mitigates the risk of cross-application vulnerabilities, while enabling fine-grained control over what software can access system resources. This approach also enables CVMs to maintain their integrity even in hostile environments where physical access is limited.
Drawing parallels with IoT devices, which share similar constraints such as remote deployment and lack of manual intervention, we will explain how Ubuntu Core’s self-healing capabilities, secure boot, and seamless rollback features offer a critical advantage for CVM environments. The OS is designed to be highly composable, allowing individual system components (such as the kernel, applications, and system services) to be independently updated or rolled back, ensuring minimal disruption and high uptime in sensitive environments.
We will also discuss Ubuntu Core's runtime integrity support, including the use of signed snaps and encrypted channels, which provide real-time verification of system and application integrity. This is essential for CVMs, where runtime integrity and protection from unauthorized access are paramount.
By leveraging Ubuntu Core’s hardened architecture, CVMs can achieve robust security, predictable system behavior, and operational flexibility, making Ubuntu Core an ideal foundation for running confidential workloads in both cloud and edge environments. This talk will cover the technical foundations of Ubuntu Core’s suitability for CVMs, demonstrating its potential to meet the unique demands of secure virtualized environments.

In the evolving field of confidential computing, Intel's TDX Connect stands out as a transformative framework, enabling seamless integration of Trusted Execution Environments (TEEs) with diverse computing devices. This session provides an end-to-end overview of TDX Connect, exploring its goals, operational lifecycle, and user-centric features such as Trust Domain operating system (TD OS) enablement and driver/application integration. Attendees will also learn about Intel's contributions to Linux upstreaming and collaborations with Cloud Service Providers (CSPs) to drive adoption. Join us to discover how TDX Connect is advancing secure computing and unlocking new possibilities for confidential data processing.


Trusted Execution Environments, in combination with open-source code and reproducible builds, provide transparency by enabling reviewers to analyze the Trusted Computing Base (TCB). The size of the TCB directly influences the speed at which new releases and bug fixes can be deployed to production, since reviews are time consuming. Hence, low-TCB environments allow transparency and trust to scale.
Crucial security features required to implement a trusted workload, such as end-to-end encryption, can significantly expand the TCB. For example, general-purpose protocols like TLS, with an extensive feature set that includes support for many signature schemes, certificate parsing, and session-resumption logic, can be a poor fit for low-TCB environments.
To address this issue, we will present a remote attestation scheme that uses the Noise Protocol Framework to create an end-to-end encrypted, attested channel between an end-user device and a TEE. The Noise Protocol Framework minimizes the number of cryptographic primitives required to establish an encrypted session bound to the attestation evidence.
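To make the binding idea concrete, here is a minimal Python sketch under stated assumptions: the Noise handshake hash is simulated with plain SHA-256, and platform attestation is replaced by the hypothetical helpers `tee_generate_evidence` and `verify_evidence`. It illustrates how evidence can be bound to a specific session, not the speakers' actual scheme.

```python
# Minimal sketch: binding attestation evidence to an encrypted channel.
# The real scheme uses the Noise Protocol Framework; here the "handshake
# hash" is simulated so the binding logic itself stays visible.
import hashlib
import hmac
import os


def handshake_hash(client_ephemeral: bytes, tee_ephemeral: bytes) -> bytes:
    """Stand-in for Noise's running handshake hash over all handshake messages."""
    return hashlib.sha256(client_ephemeral + tee_ephemeral).digest()


def tee_generate_evidence(report_data: bytes) -> dict:
    """Hypothetical TEE call: issue attestation evidence over report_data."""
    # A real TEE would return a signed quote; we fake the signature with an HMAC.
    fake_platform_key = b"platform-secret"  # stands in for the TEE's signing key
    return {
        "report_data": report_data,
        "signature": hmac.new(fake_platform_key, report_data, hashlib.sha256).digest(),
    }


def verify_evidence(evidence: dict) -> bool:
    """Hypothetical verifier call: check the evidence signature and TCB claims."""
    fake_platform_key = b"platform-secret"
    expected = hmac.new(fake_platform_key, evidence["report_data"], hashlib.sha256).digest()
    return hmac.compare_digest(expected, evidence["signature"])


# --- Handshake (simplified) ---------------------------------------------
client_e, tee_e = os.urandom(32), os.urandom(32)  # ephemeral public keys
hh = handshake_hash(client_e, tee_e)              # both sides compute this

# The TEE binds the channel to its identity by attesting over the handshake hash.
evidence = tee_generate_evidence(report_data=hh)

# The client accepts the channel only if the evidence verifies AND is bound
# to the very handshake it just performed.
assert verify_evidence(evidence)
assert hmac.compare_digest(evidence["report_data"], hh)
print("channel is attested and bound to this session")
```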


The IETF RATS working group has been designing a standard format (CoRIM) for attestation endorsements so that their behavior in any compliant Attestation Verification Service is fully determined. This talk will discuss the appraisal model we've come up with, and how its composable design accommodates a model of endorsement syndication to build a global web of trust that spans from the supply chain to service operation governance.
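As a rough, conceptual illustration of what an appraisal step involves (a sketch only: the claim names are invented, and this is not the CoRIM schema or any particular verifier's code), a verifier accepts evidence that matches a known-good reference value and then enriches the result with claims contributed by matching endorsements:

```python
# Conceptual appraisal sketch: match evidence against reference values,
# then enrich the result with claims from matching endorsements.
# Claim names and structures here are illustrative, not CoRIM.

evidence = {"measurement": "abc123", "vendor": "ExampleVendor"}

reference_values = [
    {"measurement": "abc123", "vendor": "ExampleVendor"},  # known-good build
]

endorsements = [
    # Extra claims the supply chain vouches for when the evidence matches.
    {"matches": {"measurement": "abc123"}, "adds": {"fips_certified": True}},
]


def appraise(evidence, reference_values, endorsements):
    # 1. Evidence is acceptable only if some reference value matches it on
    #    every field that reference value specifies.
    accepted = any(
        all(evidence.get(k) == v for k, v in rv.items()) for rv in reference_values
    )
    if not accepted:
        return {"status": "rejected"}

    # 2. Composable enrichment: every matching endorsement contributes claims.
    result = {"status": "affirming", "claims": dict(evidence)}
    for e in endorsements:
        if all(evidence.get(k) == v for k, v in e["matches"].items()):
            result["claims"].update(e["adds"])
    return result


print(appraise(evidence, reference_values, endorsements))
```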

Artificial intelligence (AI) is rapidly transforming our world, but it also presents new challenges for data security and privacy. How can we ensure these increasingly powerful models and the sensitive data they rely on are protected throughout their lifecycle?
This session explores how confidential computing, powered by Intel CPUs, provides a critical answer. We'll delve into practical applications of confidential computing for securing AI workloads: how it ensures privacy and confidentiality, how it is being used to build and deploy secure AI applications, and how it can protect sensitive data without sacrificing the performance needed for demanding AI tasks.
Join us to unlock the power of confidential AI and learn how to build secure, privacy-preserving AI solutions for the future.


In an era where sensitive data is increasingly processed in distributed environments, the need to ensure its protection during processing is critical. The growing reliance on Arm CPUs, known for their energy efficiency and versatility, further underscores the demand for robust and scalable security frameworks. Addressing this challenge, Arm Confidential Compute Architecture (Arm CCA) represents a transformative approach, leveraging hardware-supported encryption and dynamic memory isolation to secure data in use.
This talk explores the evolving landscape of secure processing with a focus on Arm CCA: its security model, practical tools, and real-world applications in domains such as healthcare, finance, and government. The session examines Arm CCA's Realm environment, which enhances traditional systems like TrustZone by offering advanced secure memory isolation. Using tools like QEMU Virt and FVP, we will demonstrate how developers can prepare for upcoming Armv9-A architecture chips and integrate these capabilities into modern workloads.
The session also highlights safeguarding applications in Realm VMs against malicious hypervisors and ensuring system integrity through remote attestation. By examining a real-world threat scenario, we’ll showcase the potential of confidential computing in mitigating risks and securing workloads in privacy-sensitive contexts.
FUJITSU-MONAKA, Fujitsu’s next-generation 2nm low-power Arm processor, incorporates Arm CCA to redefine secure processing for future hardware systems. Join us to gain technical insights on leveraging Arm CCA to meet the growing need for secure, energy-efficient computing.
This presentation is based on results obtained from a project, JPNP21029, subsidised by the New Energy and Industrial Technology Development Organisation (NEDO).


This session delves into NVIDIA’s latest advancements in confidential computing on NVIDIA systems and beyond, focusing on key updates to Attestation Services that expand platform support: single- and multi-GPU attestation, switch and network attestation, and upcoming capabilities being released this year. Attendees will discover how these advancements fortify security by enabling relying parties to securely validate claims, while also learning about optimized deployment strategies, supported usage patterns, and SLA benchmarks for NVIDIA’s cloud-based services, ensuring robust, scalable solutions for developers and system integrators.

We present a comprehensive framework for securely deploying Hugging Face models, particularly large language models (LLMs), within Trusted Execution Environments (TEEs). The framework encompasses the entire process from evidence collection to attestation, ensuring robust protection of AI models. Central to this approach is the Confidential AI Loader, which encrypts models prior to loading them into memory, thereby safeguarding them within the TEE. The preprocessing steps involve generating an AES-GCM key, encrypting the AI model, uploading the encrypted model to a model server, and registering the encryption key with a Key Broker Service (KBS) that interfaces with a Key Management Service (KMS). The architecture facilitates the seamless integration of encrypted models and the Hugging Face project into a container, enabling secure execution within the TEE. This methodology ensures that AI models are protected throughout their lifecycle, from preprocessing to deployment, leveraging TEEs to maintain confidentiality and integrity. See https://github.com/cc-api/confidential-huggingface-runner.
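As a rough sketch of the preprocessing flow described above (using the widely available `cryptography` package; the file names, key ID, and `register_key_with_kbs` helper are illustrative placeholders, not the project's actual API):

```python
# Sketch of the model-encryption preprocessing step: generate an AES-GCM key,
# encrypt the model file, and hand the key to a Key Broker Service (KBS).
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM


def encrypt_model(model_path: str, out_path: str) -> bytes:
    """Encrypt a serialized model with AES-GCM and return the key."""
    key = AESGCM.generate_key(bit_length=256)
    nonce = os.urandom(12)  # 96-bit nonce, standard for GCM
    aesgcm = AESGCM(key)

    with open(model_path, "rb") as f:
        plaintext = f.read()
    ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data=None)

    # Store the nonce alongside the ciphertext so the Confidential AI Loader
    # can decrypt inside the TEE once it has fetched the key via the KBS.
    with open(out_path, "wb") as f:
        f.write(nonce + ciphertext)
    return key


def register_key_with_kbs(key_id: str, key: bytes) -> None:
    """Hypothetical placeholder for registering the key with a KBS/KMS."""
    # In a real deployment this would be an authenticated API call; the KBS
    # releases the key only to workloads that pass attestation.
    print(f"registered key {key_id} ({len(key) * 8}-bit) with KBS")


if __name__ == "__main__":
    key = encrypt_model("model.safetensors", "model.safetensors.enc")
    register_key_with_kbs("demo-llm-key", key)
```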

Attestation for Confidential Computing presents a data management problem that spans an entire industry. Multiple entities need to both produce and consume data at various stages in the supply chain, creating many points of interaction across trust boundaries.
To address the complexity of this landscape, we first need to make it understandable. The RATS architecture is a great starting point, but better yet is to have working software and hands-on experience. Arm and Linaro have been collaborating on an end-to-end experimental platform for attestation, based on components from the Veraison project. We will present a demonstration of this work.
The next step is to reduce fragmentation of solutions in open-source projects that are built for production. The RATS working group works extensively to create alignment around data formats and interaction models. We present some recent work in the Confidential Containers project as a case study for how these are being adopted.
In the final part of our presentation, we will look at some future work that focuses on the distribution of endorsements and reference values within the RATS framework. We will show how the Veraison project can once again be the essential proving ground for these next steps towards a harmonised ecosystem for attestation.


This talk provides a technical deep dive into the COCONUT-SVSM project, a platform designed to provide secure services for Confidential Virtual Machines. We'll explore the project's architecture, detail significant advancements made over the last year, and discuss current challenges in securing CVM services. The talk will conclude with an overview of planned developments and the project roadmap for the coming year.

Microsoft has embraced a different approach that offers flexibility to customers through the use of a “paravisor”. A paravisor executes within the confidential trust boundary and provides the virtualization and device services needed by a general-purpose operating system (OS), enabling existing VM workloads to execute securely without requiring continual servicing of the OS to take advantage of innovative advances in confidential computing technology. Microsoft developed the first paravisor in the industry (and gave a talk about it at OC3 two years ago), and for years we have been enhancing the paravisor offered to Azure customers. This effort now culminates in the release of a new, open-source paravisor called OpenHCL. We will talk about our journey to get here, and our goals and future milestones for this new project.

Major national health insurance companies and healthcare providers need a solution to support a nationwide electronic patient record system. Given the sensitivity of private medical information, the technology infrastructure must ensure data security for millions of citizens. IBM worked with Edgeless Systems to enable Confidential Computing at scale, with support from Intel Xeon processors and Intel SGX.


Confidential Containers are a key enabler of confidential cloud-native workloads, but integrating them into existing environments can be complex. Contrast addresses this challenge by simplifying the adoption of Confidential Containers, offering a practical, Kubernetes-native solution for Confidential Computing.
In this talk, we will outline how Contrast adds a Confidential Computing layer to existing Kubernetes platforms, enabling policy-based workload attestation, secrets management, and a service mesh for mTLS-based workload authentication without disrupting existing workflows. We will also discuss Contrast’s architecture and its compatibility with hybrid environments, making it suitable for both cloud and bare-metal deployments.
The presentation will highlight real-world use cases and in-production deployments, including securing AI workloads, protecting sensitive financial data, and safeguarding sensitive information in hostile environments such as military battlefields. We’ll also provide a hands-on demo of how Contrast enables confidential application deployment with minimal effort. This session will offer valuable insights for those looking to adopt Confidential Containers and leverage Confidential Computing in practical scenarios.

Data theft and leakage, caused by external adversaries and insiders, demonstrate the need for protecting user data. Trusted Execution Environments (TEEs) offer a promising solution by creating secure environments that protect data and code from such threats. The rise of confidential computing on cloud platforms facilitates the deployment of TEE-enabled server applications, which are expected to be widely adopted in web services such as privacy-preserving LLM inference and secure data logging. One key feature is Remote Attestation (RA), which enables integrity verification of a TEE. However, compatibility issues arise because no browser natively supports RA verification, making prior solutions cumbersome and risky.
To address these challenges, in this talk, we present RA-WEBs (Remote Attestation for Web services), a novel RA protocol designed for high compatibility with the current web ecosystem. RA-WEBs leverages established web mechanisms for immediate deployability, enabling RA verification on existing browsers. We will show preliminary evaluation results and highlight open challenges when introducing RA to the web.
