Speakers & Talks

KEYNOTE
6 pm - 8 pm
Journey towards the Confidential Cloud
Mark Russinovich
CTO, Microsoft Azure
KEYNOTE
6 pm - 8 pm
Welcome Session and Introduction to Confidential Computing

Welcome to OC3! In this session, we'll outline the agenda for the day and give an introduction to confidential computing. We'll assess the current state of the confidential-computing space and compare it to last year's. We'll take a look at key problems that still require attention. We'll close with an announcement from Edgeless Systems.

Felix Schuster
Edgeless Systems
Cloud-Native
Confidential Containers: Bringing Confidential Computing to the Kubernetes Workload Masses

The new Confidential Computing security frontier is still out of reach for most cloud-native applications. The Confidential Containers project aims to close that gap by seamlessly running unmodified Kubernetes pod workloads in their own, dedicated Confidential Computing environments.

Description

Confidential Computing expands the cloud threat model into a drastically different paradigm. In a world where more and more cloud-native applications run across hybrid clouds, no longer having to trust your cloud provider is a very powerful and economically attractive proposition. Unfortunately, current confidential-computing cloud offerings and architectures are either limited in scope, workload-intrusive, or provide node-level isolation only. In contrast, the Confidential Containers open-source project integrates the Confidential Computing security promise directly into cloud-native applications by allowing any Kubernetes pod to run in its own, exclusive trusted execution environment.

This presentation will start by describing the Confidential Containers software architecture. We will show how it reuses components of the hardware-virtualization-based Kata Containers software stack to build confidential micro-VMs for Kubernetes workloads to run in. We will explain how those micro-VMs can transparently leverage the latest Confidential Computing hardware implementations, such as Intel TDX, AMD SEV, or IBM SE, to fully protect pod data while it is in use.

Going into more technical detail, we will walk through several key components of the Confidential Containers software stack, such as the Attestation Agent, the container image management Rust crates, and the Kubernetes operator. Overall, we will show how these components integrate to form a software architecture that verifies and attests tenant workloads, which pull and run encrypted container images on top of encrypted memory only.
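
The attestation-gated flow described above can be sketched roughly as follows. This is a toy illustration with hypothetical names, not the actual Confidential Containers API: a relying party releases the image key only after the guest's attestation evidence checks out, and only then is the encrypted container image decrypted inside the TEE.

```python
import hashlib

# Hypothetical: the measurement the relying party expects from a trusted guest.
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-guest-image").hexdigest()

def release_image_key(evidence: bytes) -> bytes:
    # Stand-in for the relying party verifying TEE attestation evidence.
    if hashlib.sha256(evidence).hexdigest() != EXPECTED_MEASUREMENT:
        raise PermissionError("attestation failed: key withheld")
    return b"image-decryption-key"

def decrypt_image(ciphertext: bytes, key: bytes) -> bytes:
    # Toy XOR cipher standing in for real authenticated encryption.
    return bytes(c ^ key[i % len(key)] for i, c in enumerate(ciphertext))

# The guest presents its evidence, obtains the key, and decrypts the image.
key = release_image_key(b"trusted-guest-image")
encrypted_layer = decrypt_image(b"container layer", key)  # XOR is symmetric
layer = decrypt_image(encrypted_layer, key)
```

The point of the sketch is the ordering: no key release, and hence no plaintext image, before attestation succeeds.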

The final part of the presentation will first cover the project roadmap and where it wants to go after its initial release. We will then conclude with the mandatory demo of a Kubernetes pod running in its own trusted execution environment, on top of an actual Confidential Computing-enabled machine.

Samuel Ortiz
Apple
Apps & Solutions
6 pm - 8 pm
Confidential Computing in German E-Prescription Service

Description

Approximately 500 million medical prescriptions are issued, dispensed, and procured each year in Germany. gematik is legally mandated to develop the involved processes into their digital form within its public digital infrastructure (Telematikinfrastruktur). Due to the staged development of these processes, as well as their variable collaborative nature involving medical professionals, patients, pharmacists, and insurance companies, a centralized approach for data processing was chosen, since it provides adequate design flexibility. In this setup, data protection regulations require any processed medical data to be reliably protected from unauthorized access from within the operating environment of the service. Consequently, the solution is based on Intel SGX as its Confidential Computing technology. This talk introduces the solution, focusing on trusted computing base, attestation, and availability requirements.

Andreas Berg
Gematik
6 pm - 8 pm, Meeting Room A
Apps & Solutions
Proof of Being Forgotten: Verified Privacy Protection in Confidential Computing Platform

For data owners, whether their data has been erased after use is questionable and needs to be proved, even when executing in a TEE. We introduce a security proof that verifies that sensitive data only lives inside the TEE and is guaranteed to be erased after use. We call it proof of being forgotten.

Description

One main goal of Confidential Computing is to guarantee that the security and privacy of data in use are under the protection of a hardware-based Trusted Execution Environment (TEE). The TEE ensures that the content (code and data) inside it is not accessible from outside. However, for data owners, whether their sensitive data has been intentionally or unintentionally leaked by the code inside the TEE is still questionable and needs to be proved. In this talk, we'd like to introduce the concept of Proof of Being Forgotten (PoBF). What PoBF provides is a security proof: enclaves with PoBF can assure users that sensitive data only lives inside an SGX enclave and will be erased after use. By verifying this property and presenting a report with proof of being forgotten to data owners, the complete data lifecycle protected by the TEE can be strictly controlled, enforced, and audited.
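
As a rough illustration of the PoBF idea (a toy sketch only, not the actual PoBF implementation, which operates at the enclave level): sensitive data is confined to a scoped region, erased in place after use, and the erasure is recorded so it can later be audited.

```python
import hashlib
from contextlib import contextmanager

@contextmanager
def forgotten(secret: bytes):
    """Confine sensitive data to a scope and zeroize it on exit."""
    buf = bytearray(secret)          # mutable copy we can erase in place
    try:
        yield buf
    finally:
        for i in range(len(buf)):    # erase after use, unconditionally
            buf[i] = 0

audit = []
with forgotten(b"patient-record") as data:
    digest = hashlib.sha256(data).hexdigest()   # compute on data in use
# After the scope closes, the buffer is provably all zeros.
audit.append(("erased", bytes(data) == b"\x00" * len(data)))
```

The audit record plays the role of the "proof": a verifier can check that the erasure invariant held for the complete data lifecycle.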

Mingshen Sun
Baidu
Cloud-Native
Kubernetes meets confidential computing - the different ways of scaling sensitive workloads

Cloud-native and confidential computing will inevitably grow together. This talk maps the design space for confidential Kubernetes and shows the latest corresponding developments from Edgeless Systems.

Description

Kubernetes is the most popular platform for running workloads at scale in a cloud-native way. With the help of confidential computing, Kubernetes deployments can be made verifiable and can be shielded from various threats. The simplest approach towards "confidential Kubernetes" is to run containers inside enclaves or confidential VMs. While this simple approach may look compelling on the surface, on closer inspection it does not provide great benefits and leaves important questions unanswered: How to set up confidential connections between containers? How to verify the deployment from the outside? How to scale? How to do updates? How to do disaster recovery?

In this talk, we will map the solution space for confidential Kubernetes and discuss pros and cons of the different approaches. In this context, we will give an introduction to our open-source tool MarbleRun, which is a control plane for SGX-based confidential Kubernetes. We will show how MarbleRun, in conjunction with our other open-source tools EGo and EdgelessDB, can make existing cloud-native apps end-to-end confidential.
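
A minimal sketch of how "verifying the deployment from the outside" can work under a control-plane approach (hypothetical flow and field names, not MarbleRun's actual wire protocol): the client pins the hash of the expected deployment manifest and accepts the cluster only if the attested hash matches.

```python
import hashlib
import json

# Hypothetical manifest describing the allowed services and their identities.
manifest = {"packages": {"backend": {"signer": "abc123", "version": "1.0"}}}
expected = hashlib.sha256(
    json.dumps(manifest, sort_keys=True).encode()
).hexdigest()

def verify_deployment(attested_manifest_hash: str) -> bool:
    # Accept the cluster only if the attested hash matches the pinned one.
    return attested_manifest_hash == expected
```

The key design choice: the external verifier never needs to inspect individual pods, only the single attested manifest hash that commits to the whole deployment.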

We will also discuss the additional design options for confidential Kubernetes that are enabled by confidential VM technologies like AMD SEV, Intel TDX, or AWS Nitro. In this context, we will introduce and demo our upcoming product Constellation, which uses confidential VMs to create "fully confidential" Kubernetes deployments, in which all of Kubernetes runs inside confidential environments. Constellation is an evolution of MarbleRun that strikes a different balance between ease-of-use and TCB size.

Moritz Eckert
Edgeless Systems
Apps & Solutions
SGX-protected Scalable Confidential AI for ADAS Development

Privacy is an important aspect of AI applications. We combine Trusted Execution Environments, a library OS, and a scalable service mesh for confidential computing to achieve these security guarantees for Tensorflow-based inference and training with minimal performance and porting overheads.

Description

Access to data is a crucial requirement for the development of advanced driver-assistance systems (ADAS) based on Artificial Intelligence (AI). However, security threats, strict privacy regulations, and potential loss of Intellectual Property (IP) ownership when collaborating with partners can turn data into a toxic asset (Schneier, 2016): Data leaks can result in huge fines and in damage to brand reputation. An increasingly diverse regulatory landscape imposes significant costs on global companies. Finally, ADAS development requires close collaboration across original equipment manufacturers (OEMs) and suppliers. Protecting IP in such settings is both necessary and challenging.
Privacy-Enhancing Technologies (PETs) can alleviate all these problems by increasing control over data. In this paper, we demonstrate how Trusted Execution Environments (TEEs) can be used to lower the aforementioned risks related to data toxicity in AI pipelines used for ADAS development.

Contributions

The three most critical success factors for applying PETs in the automotive domain are low overhead in terms of performance and efficiency, ease of adoption, and the ability to scale. ADAS development projects are major efforts generating infrastructure costs in the order of tens to hundreds of millions. Hence, even moderate efficiency overheads translate into significant cost overhead. Before the advent of Intel 3rd Generation Xeon Scalable Processors (Ice Lake), the overhead of SGX-protected CPU-based training of a TensorFlow model was up to 3-fold when compared to training on the same CPU without using SGX. In a co-engineering effort, Bosch Research and Intel have been able to effectively eliminate these overheads.

In addition, ADAS development happens on complex infrastructures designed to meet highest demands in terms of storage space and compute power. Major changes to these systems for implementing advanced security measures would be prohibitive in terms of time and effort. We demonstrate that Gramine’s (Tsai, Porter, & Vij, 2017) Lift and Shift approach keeps the effort for porting existing workloads to SGX minimal. Finally, being able to process millions of video sequences consisting of billions of frames in short development cycles necessitates a scalable infrastructure. By using the MarbleRun (Edgeless Systems GmbH, 2021) confidential service mesh, Kubernetes can be transformed into a substrate for confidential computing at scale.

To demonstrate the validity of our approach, Edgeless Systems and Bosch Research jointly implemented a proof-of-concept implementation of an exemplary ADAS pipeline using SGX, MarbleRun and Gramine as part of the Open Bosch venture client program.

Stefan Gehrer
Bosch
Scott Raynor
Intel
Moritz Eckert
Edgeless Systems
low-level magic
AMD Secure Nested Paging with Linux - Development Update

Support for AMD Secure Nested Paging (SNP) for Linux is under heavy development. There is work ongoing to make Linux run as an SNP guest and to host SNP protected virtual machines. I will explain the key concepts of SNP and talk about the ongoing work and the directions being considered to enable SNP support in the Linux kernel and the higher software layers. I will also talk about proposed attestation workflows and their implementation.

Jörg Rödel
SUSE
Apps & Solutions
Confidential Computing Governance

Regulated institutions have strong business reasons to invest in confidential computing. As with any new technology, governance takes center stage. This talk explores the vast landscape of considerations involved in provably and securely operationalizing Confidential Computing in the public cloud.

Download the accompanying paper here.

Description

Heavily regulated institutions have a strong interest in strengthening protections around data entrusted to public clouds. Confidential Computing is an area that will be of great interest in this context. Securing data in use raises a significantly larger number of questions around proving the effectiveness of new security guarantees -- significantly more than either securing data-in-transit or data-at-rest.

Curiously, this topic has so far received no attention in the CCC, the IETF, or anywhere else that we're aware of.

This talk will propose a taxonomy of confidential computing governance and break the problem space down into several constituent domains, with requirements listed for each. Supply chain and toolchain considerations, controls matrices, control plane governance, attestation and several other topics will be discussed.

Mark Novak
JPMorgan Chase
low-level magic
Transparent Release Process for Releasing Verifiable Binaries

Binary attestation allows a remote machine (e.g., a server) to attest that it is running a particular binary. However, usually, the other party (e.g., a client) is interested in guarantees about properties of the binary. We present a release process that allows checking claims about the binaries.

Description

Project Oak provides a trusted runtime and a generic remote attestation protocol for a server to prove its identity and trustworthiness to its clients. To do this, the server, running inside a trusted execution environment (TEE), sends TEE-provided measurements to the client. These measurements include the cryptographic hash of the server binary signed by the TEE’s key. This is called binary attestation.

However, the cryptographic hash of the binary is not sufficient for making any guarantees about the security and trustworthiness of the binary. What is really desired is semantic remote attestation, which allows attestation to the properties of a binary. However, such approaches are expensive, as they require running checks (e.g., a test suite) during the attestation handshake.

We propose a release process that fills this gap by adding transparency to binary attestation. For transparency, the release process publishes all released binaries in a public and externally maintained verifiable log. Once an entry has been added to the log, it can never be removed or changed. So a client, or any other interested party (e.g., a trusted external verifier or auditor), can find the binary in the verifiable log. This is important for the client, as it allows the client to detect, with higher likelihood, whether it is interacting with a malicious server. Having a public verifiable log is equally important, as it supports public scrutiny of the binaries.
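
The append-only property can be illustrated with a toy hash-chained log. This is a deliberate simplification: real transparency logs use Merkle trees with inclusion and consistency proofs rather than a plain hash chain.

```python
import hashlib
import json

def entry_hash(prev_hash: str, record: dict) -> str:
    # Each entry commits to its predecessor, chaining the whole log.
    payload = prev_hash + json.dumps(record, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Build a log of released binaries (placeholder digests, for illustration).
log = []
prev = "0" * 64
for record in [{"binary": "server-v1", "sha256": "digest-1"},
               {"binary": "server-v2", "sha256": "digest-2"}]:
    prev = entry_hash(prev, record)
    log.append((record, prev))

def verify(log_entries) -> bool:
    # A verifier replays the chain; tampering with any past entry
    # changes every subsequent hash and is detected.
    prev = "0" * 64
    for record, h in log_entries:
        prev = entry_hash(prev, record)
        if prev != h:
            return False
    return True
```

Because each hash covers all history before it, removing or changing an entry without detection would require rewriting every later entry, which public monitors would notice.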

In addition, we are implementing an ecosystem to provide provenance claims about released binaries. We use SLSA provenance predicates for specifying provenance claims. Every entry in the verifiable log corresponding to a released binary contains a provenance claim, cryptographically signed by the team or organization releasing the binary. The provenance claim specifies the source code and the toolchain for building the binary from source. The provenance details allow reproducing server binaries from the source, and verifying (or more accurately falsifying) security claims about the binaries by inspecting the source, its dependencies, and the build toolchain.

Razieh Behjati
Google
Cloud-Native
Project Veraison - Verification of Attestation

OSS Project Veraison builds software components that can be used to create Attestation Verification services required to establish that a CC environment is trustworthy. These flexible & extensible components can be used to address multiple Attestation technologies and deployment options.

Description

Establishing that a Confidential Computing environment is trustworthy requires the process of Attestation. Verifying the evidential claims in an attestation report can be a complex process, requiring knowledge of token formats and access to a source of reference data that may only be available from a manufacturing supply chain.

Project Veraison (VERificAtIon of atteStatiON) addresses these complexities by building software components that can be used to create Attestation Verification services.

This session discusses the requirements for determining that an environment is trustworthy, the mechanisms of attestation, and how Project Veraison brings consistency to the problems of appraising technology-specific attestation reports and connecting to the manufacturing supply chain, where the reference values of what is 'good' reside.

Simon Frost
Arm
Thomas Fossati
Arm
Apps & Solutions
From zero to hero: making Confidential Computing accessible

How can we make Confidential Computing accessible, so that developers of all levels can quickly learn and use this technology? In this session, we welcome three Outreachy interns, who had zero knowledge of Confidential Computing, to showcase what they've developed in just a few months.

Description

Implementing state-of-the-art Confidential Computing is complex, right? Developers must understand how Trusted Execution Environments work (whether they are process-based or VM-based), be familiar with the different platforms that support Confidential Computing (such as Intel's SGX or AMD's SEV), and have knowledge of complex concepts such as encryption and attestation.

Enarx, an open source project that is part of the Confidential Computing Consortium, abstracts all these complexities and makes it really easy for developers of all levels to implement and deploy applications to Trusted Execution Environments.

The Enarx project partnered with Outreachy, a diversity initiative from the Software Freedom Conservancy, to welcome three interns who had zero knowledge of Confidential Computing. In just a few months, they learned the basics and started building demos in their favorite languages, from simple to more complex.
In this session, they'll have the opportunity to showcase their demos and share what they've learned. Our hope is to demonstrate that Confidential Computing can be made accessible and easy to use by all developers.

Nick Vidal
Profian
Cloud-Native
Understanding trust relationships for Confidential Computing

Confidential Computing requires trust relationships. What are they, how can you establish them, and what are the possible pitfalls? Our focus will be cloud deployments, but we will look at other environments such as telecom and Edge.

Description

Deploying Confidential Computing workloads is only useful if you can be sure what assurances you have about trust. This requires establishing relationships with various entities, and sometimes rejecting certain entities as unsuitable for trust. Examples of some of the possible entities include:
- hardware vendors
- CSPs
- workload vendors
- open source communities
- independent software vendors (ISVs)
- attestation providers 

This talk will address how and why trust relationships can be established, the dangers of circular relationships, some of the mechanisms for evaluating them, and what they allow when (and if!) they are set up. It describes the foundations for considering when Confidential Computing makes sense, and when you should mistrust the claims of some of those offering it!

Mike Bursell
Profian
low-level magic
Exploring OSS guest firmware for Confidential VMs

As confidential VMs become a reality, trusted components within the guest, such as guest firmware, become increasingly relevant for the trust and security posture of the VM. In this talk, we will focus on our explorations in building "customer managed guest firmware" for increased control and auditability of the CVM's TCB.

Description

Confidential computing developers like flexibility and control over the guest TCB, because that allows managing which components make up the trusted computing base. In a VM, these requirements are tricky to meet. In this talk, you will learn how in Azure we are enabling new capabilities to help you make a full VM a Trusted Execution Environment, and to help your app perform remote attestation with another trusted party in a Linux VM environment with OSS guest firmware options.

Pushkar V. Chitnis
Microsoft
Ragavan Dasarathan
Microsoft
Apps & Solutions
Mystikos Python support with demo of confidential ML inference using PyTorch

In this talk, we present the Mystikos project's progress on Python programming language support, and an ML workflow in a cloud environment that preserves the confidentiality of the ML model and the privacy of the inference data even if the cloud provider is not trusted. In addition, we provide a demo showing how to protect data using secret keys stored in Azure Managed HSM, how to retrieve the keys from MHSM at run time using attestation, and how to use the keys for decryption. We also demonstrate how an application can add this secret-provisioning capability with simple configuration.

Description

Confidential ML involves many stakeholders: the owner of the input data, the owner of the inference model, the owner of the inference results, etc. Porting an ML workload to Confidential Computing, and managing keys and their secure retrieval into the Confidential Computing ML application, are challenging for users who have a limited understanding of Confidential Computing confidentiality and security. We provide a solution that implements the heavy lifting in the Mystikos runtime: the programming-language runtime, attestation, encryption/decryption, key provisioning, etc., so that users only have to convert their Python-based ML applications and configure them with a few lines of JSON. While the demo takes advantage of the Secure Key Unwrap capability of Azure Managed HSM, the solution is based on an open framework that can be extended to other key vault providers.
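
The secure key release pattern described above can be sketched as follows. All names here are illustrative, not the Azure Managed HSM API: the key service unwraps the data key only for a party presenting acceptable attestation evidence, and the application then decrypts its model or data with the released key.

```python
import hashlib

# Hypothetical: the enclave measurement the key service trusts.
TRUSTED_MRENCLAVE = hashlib.sha256(b"mystikos-ml-app").hexdigest()

def unwrap_key(attestation_report: dict, wrapped_key: bytes) -> bytes:
    # Release the data key only to an enclave with the expected measurement.
    if attestation_report.get("mrenclave") != TRUSTED_MRENCLAVE:
        raise PermissionError("untrusted enclave: key release denied")
    # Toy XOR unwrap; a real HSM performs cryptographic key unwrapping.
    return bytes(b ^ 0xFF for b in wrapped_key)

wrapped = bytes(b ^ 0xFF for b in b"model-key")   # key at rest, wrapped
key = unwrap_key({"mrenclave": TRUSTED_MRENCLAVE}, wrapped)
```

The important property is that the plaintext key only ever exists after, and conditional on, successful attestation.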

Xuejun Yang
Microsoft
Apps & Solutions
Smart Contracts with Confidential Computing for Hyperledger Fabric

Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Intel SGX to protect the integrity and confidentiality of Smart Contracts. This talk is a FPC 101 and will showcase the benefits of Confidential Computing in the blockchain space.

Description

Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Confidential Computing Technology to protect the integrity and confidentiality of Smart Contracts.

In this talk we will learn what Fabric Private Chaincode is and how it can be used to implement privacy-sensitive use cases for Hyperledger Fabric. The goal of this talk is to equip developers and architects with all the necessary background and first hands-on experience to adopt FPC for their projects.

We start with an introduction to FPC, explaining the basic FPC architecture, security properties, and hardware requirements. We will cover the FPC Chaincode API and application integration using the FPC Client SDK.
The highlight of this talk will be a showcase of a new language support feature for Fabric Private Chaincode using the EGo open-source SDK.

Marcus Brandenburger
IBM Research
Apps & Solutions
Using Secure Ledger Technology to Tackle Compliance and Auditing

Secure ledger technology enables customers who need to maintain a source of truth where even the operator is outside the trusted computing base. Top examples: recordkeeping for compliance purposes, and enabling trusted data.

Description

This session will dive into how secure ledgers provide security and integrity to customers in compliance- and auditing-related scenarios. Specifically, customers who must maintain a source of truth which remains tamper-protected from everyone. We will also discuss how secure ledgers benefit from confidential computing and open source.

Shubhra Sinha
Microsoft
Apps & Solutions
Unlock the mysteries of data with confidential computing powered by Intel SGX

We all understand that data sovereignty in highly regulated industries like government, healthcare, and fintech is critical, prohibiting even the most basic data insights because the data cannot be moved to a centralized location for collaboration or model training. Confidential computing powered by Intel Software Guard Extensions (Intel SGX) changes all of that. Join us to learn how customers across every industry are gaining insights never before possible.

Laura Martinez
Intel
Apps & Solutions
PCI compliance with Azure confidential computing

Storing payment data in your e-commerce site may expose your business to challenges for PCI compliance. Azure confidential computing provides a platform for protecting your customer’s financial information at scale.

Stefano Tempesta
Microsoft
Apps & Solutions
"PODfidential" Computing - Protecting Workloads with Cloud Native Scale and Agility

Balancing data privacy and runtime protection with the ease and nimbleness of deployments is the reality of the current state of confidential computing.
This talk explores the simplicity of pods and the availability of orchestration for confidential computing, examining the adoption of Kata pod isolation with protected virtualisation.

Description

We discuss the use of Kata pod isolation with protected virtualisation, striving for confidential computing with a cloud-native model while preserving most of the Kubernetes compliance. This talk will summarise the state of the technical discussion in the industry, discuss solutions and open questions, and give a hint of the future of confidential computing with cloud-native models. The speed of adoption of confidential computing will to a large extent depend on how easy it is for developers and administrators to incorporate runtime protection into the established technology stack. From use cases to technology demos, the technology team is moving forward.

Stefan Liesche
IBM
James Magowan
IBM
Foundations
Confidential computing with Always Encrypted using enclaves

Imagine a database system that can perform computations on sensitive data without ever having access to the data in plaintext. With such confidential computing capabilities, you could protect your sensitive data from powerful, high-privileged but unauthorized users, including cloud operators and DBAs, while preserving the database system's processing power. With Always Encrypted using enclave technologies, this powerful vision has become a reality. Join us for this session to learn about this game-changing technology. We will demonstrate the main benefits of Always Encrypted with secure enclaves, discuss best practices for configuring the feature, and address the latest Always Encrypted investments in Azure SQL and other Azure data services.

Pieter Vanhove
Microsoft
AI
Confidential Neural Computing - hosting generative AI model workloads in a Trusted Execution Environment

As generative AI models grow more & more capable, products increasingly want to leverage these models to provide personalized generative experiences for their users. This personalization relies on fine-tuning and running these models with sensitive user data. The sensitivity of this user data motivates the need to train & run these models in a privacy-safe way that provides strong safety guarantees to the user, and earns user trust.

The Confidential Neural Computing project builds an ML framework focused on enabling generative AI training and inference in secure enclaves. In this talk, we give an overview of some of the core components of the Confidential Neural Computing framework, explain how the framework leverages current CPU & GPU confidential computing technologies, and share results from our current and ongoing areas of investigation.

Joe Woodworth
Google
Cloud-Native
Attestation Strategies for Confidential Containers in Distributed Systems

In the ever-evolving landscape of Confidential Computing, Confidential Containers (CoCo) have now established themselves as a robust option for deploying secure workloads. With Microsoft Azure's recent integration of CoCo in their Azure Kubernetes Service (AKS), the technology's transition to mainstream production use is evident. CoCo expertly bridges the gap between Intel SGX enclaves, which offer a tight Trusted Computing Base (TCB) and single-workload isolation, and the simplicity of deploying confidential VMs or clusters.

This presentation delves into the core advancements in CoCo, particularly its adaptation of VM isolation technologies like AMD SEV or Intel TDX for containerized applications. However, with this achievement comes the need to address familiar challenges in securing distributed containerized systems, akin to those faced with SGX. We will outline these issues, focusing on the complexities of attesting distributed applications and securing container-to-container communications. Additionally, we will examine the nuanced role of Kubernetes in orchestrating these workloads, emphasizing the critical balance between keeping the application isolated from the untrusted orchestrator while maintaining the usability and workflows for the operations team.

Our discussion will then pivot to innovative solutions, highlighting the concept of execution policies adopted by the CoCo community for effective workload attestation. By revisiting distributed attestation and orchestration principles from the open-source MarbleRun project initially designed for SGX environments, we will explore their potential application in CoCo contexts.

The session culminates with a practical demonstration. We will showcase the deployment of a distributed microservice application using unmodified confidential containers. This demo will illustrate not only the protection of service-to-service communications but also the comprehensive attestation of the application as a unified entity, all within a managed Kubernetes framework. This talk aims to provide a deep dive into the current state and future possibilities of CoCo, offering attendees insights into tackling real-world challenges in cloud-native Confidential Computing.

Moritz Eckert
Edgeless Systems
Paul Meyer
Edgeless Systems
Foundations
Hashicorp Vault without a root token but with TEE authentication

This talk is about securing Hashicorp Vault in a TEE, adding a TEE authentication plugin and removing the single key to the kingdom.

While putting Vault in a TEE has already been done, we developed a Vault plugin to enable remote attestation as an authentication mechanism for TEEs. Additionally, helper TEEs have been developed to remove the need for the root token, or any other admin token, to be in the possession of a sole admin. Administration tasks are only accepted with an m-of-n signature scheme via an admin TEE.

In the end, the key store and the applications run in TEEs and exchange the secrets, with no secret ever leaving the TEE space. Everything is attested and no admin has the power to change anything without getting sufficient signatures from other trusted parties.
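
The m-of-n admin approval described above can be sketched as a simple quorum check. This is illustrative only: the real system uses cryptographic threshold signatures verified inside the admin TEE, not name sets.

```python
def approved(signatures: set, admins: set, m: int) -> bool:
    """An operation passes only with at least m valid admin signatures."""
    return len(signatures & admins) >= m

admins = {"alice", "bob", "carol", "dave"}
ok = approved({"alice", "carol"}, admins, 2)          # quorum reached
rejected = approved({"alice", "mallory"}, admins, 2)  # outsider doesn't count
```

The design point: no single admin, not even one holding a former "root" credential, can push an administration task through alone.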

Harald Hoyer
Matter Labs
Foundations
Lessons learned: Operational security for an Azure hosted confidential computing application

Maintaining operational security for an environment hosting confidential computing requires a thoughtful approach to restricting privileged infrastructure access, hardening auditing for control and data plane, and a strategy for network isolation as a defense in depth for identity and services. This talk will walk through the infrastructure tooling and Azure tenant configuration used by a Microsoft internal confidential computing application migrated from an on-prem HSE (Highly Secure Environment). It will highlight the strategies taken with Microsoft Entra ID, Azure DevOps, Azure Networking, and Azure Diagnostics and Monitoring to ensure effective controls for building, deploying, and running our confidential application.

Robert Beyreis
Microsoft
Sponsored Talk
Attestation for Hosted Workloads

Evervault Enclaves are a hosted confidential computing platform, which allows our customers to deploy their Docker images into managed Nitro Enclaves. To keep the environment consistent with more traditional cloud environments, we wanted to allow our customers to define environment variables and secrets to be consumed by their Enclave at startup.

In this talk, we will walk through how we used Attestation to provide a secure start-up handshake for our customers’ Enclaves, ensuring that their secrets are only ever shared with the image that they provide.
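
A hedged sketch of such an attestation-gated handshake (not Evervault's actual protocol; the `image_measurement` field and all values are hypothetical): the provisioning side releases secrets only if the attested measurement matches the one supplied with the customer's image.

```python
import hashlib

# The customer registers the expected measurement of their image up front.
EXPECTED_MEASUREMENT = hashlib.sha384(b"customer-image-v1").hexdigest()
SECRETS = {"DB_PASSWORD": "hunter2"}

def verify_and_release(attestation_doc: dict) -> dict:
    """Release secrets only if the attested image matches the expected one."""
    if attestation_doc.get("image_measurement") != EXPECTED_MEASUREMENT:
        raise PermissionError("enclave image does not match expected measurement")
    return SECRETS

# An enclave built from the registered image attests successfully...
doc = {"image_measurement": hashlib.sha384(b"customer-image-v1").hexdigest()}
assert verify_and_release(doc) == {"DB_PASSWORD": "hunter2"}
```

In a real deployment the measurement would come from a signed attestation document produced by the hardware, and the secrets would be returned over a channel bound to that attestation.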

Liam Farrelly
Evervault
AI
Confidential AI inference in practice: What's required and how to implement it

With Nvidia's H100 AI accelerator becoming available, confidential computing can finally be applied to state-of-the-art AI workloads. Given the tremendous momentum of AI, Confidential AI could well become the "killer app" for confidential computing.

Still, simply running AI workloads on confidential computing-enabled accelerators doesn't provide strong security properties. What is needed is meaningful end-to-end attestation and end-to-end encryption. Only with these, Confidential AI can deliver actual value.

In this talk, we'll map the design space for accelerator-based Confidential AI. We'll focus on AI inference and discuss how the following security goals can be achieved in practice:

(1) Protection of AI workloads against the infrastructure (for secure cloud migration)

(2) Protection of user data from AI SaaS providers

While both goals have compelling use cases, (2) is significantly more difficult to achieve than (1). We'll explain why and present a path to achieving (2) in practice based on specific insights around the common structure of AI SaaS. If time permits, we'll close the talk with live demos for both (1) and (2) running on Nvidia H100. These demos will be based on upcoming open-source software by Edgeless Systems.

In summary, our goal is for attendees to take away a good understanding of what is required for actual Confidential AI and how this can be implemented in practice.

Otto Bittner
Edgeless Systems
Felix Schuster
CEO
Edgeless Systems
Panel
The status quo and potential of Confidential AI

OC3 brings back this exciting panel with industry leaders, this time to discuss Confidential AI. The panelists will discuss what Confidential AI is, its use cases, technical challenges, and regulatory incentives and limits, and will make predictions about the future of this technology. Will AI be the “killer app” for confidential computing? When will confidential computing be the standard for AI?

The panel will be moderated by Felix Schuster.

Ian Buck
VP of Hyperscale and HPC
NVIDIA
Greg Lavender
CTO & EVP
Intel
Mark Papermaster
CTO & EVP
AMD
Mark Russinovich
CTO
Microsoft Azure
Foundations
Evolution of the Arm Confidential Compute Architecture, and how Arm is supporting ecosystem developers

This talk will outline upcoming features in the Arm Confidential Compute Architecture (Arm CCA), and the learning resources offered by Arm to users of CCA and of the Realm Management Extension (RME).

Realm Management Extension (RME) is an extension in Arm's A-profile architecture, introduced in Armv9.3-A, which enables confidential computing on Arm.

Realms are protected execution environments, which facilitate the key features and processes of confidential computing, including isolation and attestation. Realms can be created using an optional reference software design, called Arm Confidential Compute Architecture (Arm CCA). Arm CCA is part of a series of hardware and software architecture innovations that comprise a comprehensive package of support for confidential compute.

Upcoming features in Arm CCA include:

Extending the boundary of the confidential computing environment beyond the SoC, by adding support for assignment of device functions to Realms.

Enabling more flexibility in the design and deployment of confidential computing software stacks, by providing additional privilege separation within Realms.

Ensuring serviceability of the underlying platform, without compromising the security guarantees provided to end users.

With RME-enabled hardware soon to be available from Arm ecosystem partners to developers everywhere, Arm is helping developers prepare for this exciting new capability. This support extends to producing end-to-end demonstrations of Arm CCA (including endorsement and verification) so that such flows can be easily deployed in real-world applications. We are also launching a range of easily digestible educational pieces for developers as part of our Learning Paths on learn.arm.com, which focus on the key jobs developers need to do to harness state-of-the-art technologies from Arm and partners. Our first Learning Paths for RME, for instance, include 'Get Started with RME' and 'Learn How to Create a Virtual Machine Using Arm Confidential Compute' (links below).

In this interactive session, find out about our roadmap for professional learning content and help us shape the support that we offer developers.

Get Started With Realm Management Extension (RME): https://learn.arm.com/learning-paths/cross-platform/cca_rme/

Learn how to create a virtual machine in a Realm using Arm Confidential Compute Architecture (CCA): https://learn.arm.com/learning-paths/servers-and-cloud-computing/rme-cca-basics/

Gareth Stockwell
Arm
Nick Sample
Arm
Paul Howard
Arm
Attestation
Using TDISP to Extend Attestation to Devices connected to a Trusted Execution Environment

Keeping data safe within the confines of a Trusted Execution Environment is typically conceptualized as keeping data within a boundary that encircles the CPU and any memory that holds the data. The data are protected by an encryption key held by the CPU. This model functions well when any data that leave the secured boundary (to be analyzed by a GPU, stored on a file server, or sent across a network link) are encrypted with a well-protected key prior to exiting the TEE.

This places a large burden on the CPU, which must perform the computations required by the workload in addition to encrypting all data destined to exit the TEE boundary. To alleviate these burdens and allow more CPU cycles to be delivered as a sellable commodity, cloud infrastructure vendors have focused on price-performance improvements, creating fast-moving solutions for remote storage, accelerated networking interfaces, packet spraying, and more. These price-performance optimizations significantly improve cloud infrastructure performance. However, they can invalidate some Confidential Computing assumptions and create an uncomfortable choice between confidentiality and price performance.

Smart NICs in particular have specialized processors optimized to perform the encryption required to safely move data across a network. Offloading encryption duties to the NIC allows much faster I/O through these devices while freeing the CPU to do more of the intended analysis. However, before we can give these devices access to unencrypted data so that they can perform the encryption, we must first ask for attestation evidence that the devices are trustworthy. The TEE Device Interface Security Protocol (TDISP) allows us to extend the trust we place in a Confidential Computing TEE to devices that connect to the TEE. In this case, we should take the same care evaluating the attestation evidence presented by the device as we did when attesting the CPU. The attestation evidence must be based on a hardware root of trust, and the hardware, firmware, and software that have access to the data should provide evidence of trustworthiness. Any device that has access to unencrypted data should be held to the same standard of a hardware-based, attested root of trust that is stipulated by the CCC.

Alec Fernandez
Microsoft
Foundations
Tightening side channel protections with Intel SGX AEX-Notify

Intel SGX supports the creation of shielded enclaves within unprivileged processes. Code and data within an enclave cannot be read or modified by the operating system or hypervisor, nor by any other software. However, side-channel attacks can be challenging to comprehensively mitigate. This talk will give an overview of AEX-Notify, a new, flexible architecture extension that makes enclaves interrupt-aware: enclaves can register a trusted software handler to be run after an interrupt or exception (such as a fault). AEX-Notify can be used as a building block for implementing countermeasures against different types of interrupt- and fault-based attacks. AEX-Notify is now available on 4th Generation Intel Xeon and newer products with SGX, and is also backward-portable to all older server products via a microcode update. The Intel SGX SDK for Linux now supports a default trusted software handler that mitigates attacks that use interrupts or exceptions to exert fine-grained control over enclave execution, for example, by forcing a single enclave instruction to execute each time the enclave is entered.

Scott Constable
Intel
Apps & Solutions
An Architecture Reference for Confidential Decentralized Identity

The identity & credentials economy has been relatively stable for a very long period. For hundreds of years, it has been a quasi-institutional monopoly centered around governments and universities, with relatively little change and innovation. Recently, however, this industry has begun to be disrupted by the globalization of higher education and labor markets, requiring credentials to work at a much larger scale. The increasing need for access to digital services that require multiple forms of digital identity (digital wallets, online banking, social media accounts, etc.) carries significant risk if these systems are not thoughtfully designed and carefully implemented.

A new form of identity is needed, one that weaves together technologies and standards to deliver key identity attributes, such as self-ownership and censorship resistance, that are difficult to achieve with existing systems. Cryptographically secure, decentralized identity systems could provide greater privacy protection for users, while also allowing for portability and verifiability.

This session describes an architecture reference for Decentralized Identifiers (DIDs) that fits the self-sovereign identity (SSI) framework of the issuer-holder-verifier process. The solution architecture demonstrates how to make this "trust triangle" trustworthy and confidential. Confidential Identity Hubs, hosted on confidential computing infrastructure in multiple “hubs” around the world, create the necessary distributed network for storing and securing identity elements at scale. Security of sensitive data is ensured by the redundancy of the distributed network, and governance remains decentralized by removing sole ownership of the provided infrastructure.

Stefano Tempesta
Aetlas
Sponsored Talk
The Road Ahead: How Confidential Computing Will Evolve in the 2020s and Beyond

As we venture into the 2020s, confidential computing stands at a pivotal moment, with secure primitives and stabilized secure ABIs taking center stage. This talk will explore this exciting evolution, highlighting how foundational technologies like Veraison, Gramine, and Keystone are shaping a more secure digital future, as well as some exciting news for our big impact projects joining the open source community. We'll delve into the Confidential Computing Consortium's mission to foster open collaboration in this field, emphasizing the significance of their projects in advancing data security and privacy. We'll also showcase the consortium's role in steering the future of cybersecurity and how these collective efforts are paving the way for a more protected and trustworthy computing landscape, essential in our increasingly data-driven world.

Sal Kimmich
Confidential Computing Consortium
Keynote
Welcome keynote and introduction to confidential computing

Welcome to OC3! During this session, Felix will delineate OC3's schedule and provide an introduction to confidential computing. He will showcase and evaluate the present situation of confidential computing and draw comparisons to last year, highlighting the latest developments, e.g. confidential AI.

Felix Schuster
CEO
Edgeless Systems
Foundations
Securely collaborating across multiple cloud providers

Secure collaboration across teams and organizations powered by Trusted Execution Environments (TEEs) helps break down silos, unlock business value, and enable previously impractical use cases. However, sensitive data or intellectual property of all collaborators is often distributed among multiple locations or cloud environments, requiring special care to interoperate.

In this talk we’ll show how we freed the TEE from data-location constraints and allowed it to work with data across multiple CSPs.

Joshua Krstic
Google
Sponsored Talk
End-to-end Confidential Computing in a diverse AI landscape

Confidential Computing plays an increasingly important role in protecting AI models and sensitive data. As AI transcends the data center and enhances our lives in remarkable ways, we need primitives to guarantee our data is used responsibly. With Confidential Computing, developers gain new tools to enforce the isolation of applications and their associated data and models across diverse markets. Arm’s existing reach, from your pocket to the cloud and beyond, provides a unique perspective on the evolution of personalized AI.  

In this talk, we will take apart an end-to-end AI use case that will benefit from Confidential Computing and look at the implications to the overall SW architecture. We will emphasize the need for attestation everywhere. We will present best practices and considerations when planning out a deployment. 

Marc Meunier
Arm
Attestation
DICE Attestation on AMD SEV-SNP

AMD SEV-SNP enables hardware-based attestation, which can be used to provide transparency into VM workloads. But since the attestation provided by the hardware only covers the initial VM state before booting, it does not provide a full view of the identity of the workload.

In this talk we present an implementation of a layered DICE attestation that ties measurements of each boot stage, from application to kernel to firmware, down to the hardware-rooted attestation. This allows clients to effectively verify the whole software stack of the VM workload, not just its initial state.
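
The layered chaining can be illustrated with a minimal sketch. This is not the presented implementation; it only shows the general DICE idea of deriving each stage's Compound Device Identifier (CDI) from the previous CDI and a hash of the next stage's code, so the final value commits to the whole boot chain. The stage names and seed are hypothetical.

```python
import hashlib
import hmac

def next_cdi(cdi: bytes, stage_code: bytes) -> bytes:
    """CDI_{n+1} = HMAC(CDI_n, H(code of the next stage))."""
    return hmac.new(cdi, hashlib.sha256(stage_code).digest(), hashlib.sha256).digest()

cdi = b"hardware-unique-secret"  # stands in for the hardware-rooted secret
for stage in [b"firmware", b"kernel", b"application"]:
    cdi = next_cdi(cdi, stage)

# A verifier that knows the expected stage hashes can recompute the chain;
# tampering with any stage yields a different final CDI.
```

Because each derivation mixes in the previous stage's identity, a client verifying the final value implicitly verifies every layer below it, which is what lets the attestation cover more than the initial VM state.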

Juliette Pluto
Google
Ivan Petrov
Google
Foundations
Asterinas: A safe and efficient Rust-based OS kernel for TEE and beyond

In the realm of OS kernels, particularly those within VM TEEs, memory safety is a paramount concern. Rust, known for its safety features, aids in developing secure kernels but is not a panacea. Firstly, Rust's unsafe features, such as pointer dereferencing and inline assembly, are necessary for low-level, error-prone tasks and often permeate the codebase. Secondly, the guest kernel in a VM TEE often processes untrusted inputs from the host (over 1,500 instances in Linux, per Intel's estimation) through hypercalls, MMIO, etc., posing a risk of exploitable memory safety vulnerabilities.

This leads us to explore how effectively a Rust-based kernel can minimize its Trusted Computing Base (TCB) against memory safety threats, including Iago attacks. Our response is Asterinas: a safe and efficient OS kernel crafted in Rust, offering Linux ABI compatibility. Asterinas introduces a groundbreaking framekernel OS architecture. This design splits the kernel into two distinct halves within the same address space: the Framework and Services. The Framework is the sole domain allowed to utilize unsafe Rust features, providing a high-level, safe and sound API for the Services, which are exclusively developed in safe Rust. The Services are responsible for providing most of the OS functionalities, including enabling all peripheral devices. As the entire kernel resides in the same address space, different parts of the kernel can communicate in the most efficient way.

In this talk, we dive into the design and implementation of Asterinas. We will spotlight the pioneering framekernel OS architecture and show how the kernel is ported to and fortified for Intel TDX. The project is set to enter the open-source domain around early January 2024.

Hongliang Tian
Ant Group
Edmund Song
Intel
Cloud Native
Confidential Cloud Native Attestation – challenges and opportunities

Confidential compute (CC) brings with it tamper-resistant registers to measure digital ingredients such as BIOS, firmware, kernel, and beyond, akin to what the Trusted Computing Group (TCG)'s TPM 2.0 offers. Cloud infrastructures are varied: multiple CC vendors, each potentially with multiple product generations, offer confidential CPUs, GPUs, and other special-purpose processing units. Further, there are at least three flavors of CVM use: whole confidential Kubernetes clusters, launching traditional virtual machine payloads as a CVM using KubeVirt or Virtual Kubelet, or running a confidential container à la CoCo. What should one measure, particularly with confidential clusters where workloads come and go? The trick lies in capturing invariants and keeping them separate so as not to have a combinatorial explosion of values to register in an attestation service as known-good values. Further, what is the essence that we must keep invariant to protect the workloads in the various contexts?

In this talk we shall share an overview of the landscape, followed by our proposal to measure invariants in a typed data structure, with a summary in the CVM's tamper-resistant measurement registers, and show how this supports scalable attestation. It will be illustrated in the context of Intel TDX using established techniques as in CoCo, Linux IMA, dm-verity, and CCNP.

Malini Bhandaru
Intel
Mikko Ylinen
Intel
Sponsored Talk
Confidential Computing in 2024 – Innovating Secure and Scalable Solutions

We are on the cusp of a transformative era. Technical readiness and market momentum will converge in 2024 to accelerate growth and adoption of Confidential Computing.  This session will offer a comprehensive assessment of the industry’s progress as we align with the industry imperatives described in Intel CTO Greg Lavender’s 2023 keynote at OC3.  We will also provide an in-depth look at Intel’s strategic initiatives poised to address remaining adoption barriers and elevate Confidential Computing to new levels of security, performance, and user-friendly scalability.

Anand Pashupathy
Intel
Apps & Solutions
Case study: University Clinic Freiburg moves to the cloud with Confidential Kubernetes

Faced with capacity constraints and the high costs of on-prem infrastructure, University Clinic Freiburg considered moving to the cloud. However, privacy and security concerns from data protection officers initially posed a roadblock. In this interview-style session, Dr. Christian Haverkamp shares the clinic’s journey to a public cloud by leveraging confidential computing. Join the talk to understand how Confidential Kubernetes based on AMD SEV-SNP allowed the clinic to achieve a secure, accelerated cloud migration.

Christian Haverkamp
University Medical Center Freiburg
Attestation
Verus: Extending Integrity Measurement Architecture for Attestation

The Integrity Measurement Architecture (IMA) is a subsystem in the Linux kernel. IMA can measure files accessed through the execve(), mmap(), and open() system calls based on user-defined policies. However, the current implementation of IMA has some limitations.
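
For context, the core mechanic behind an IMA measurement list is an extend operation that folds each file hash into a running register (a TPM PCR, or a CVM measurement register), so the register value commits to the whole ordered list. A minimal sketch of that mechanic (illustrative only; not Verus or the kernel implementation, and the file names are hypothetical):

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """PCR-style extend: new_register = H(old_register || measurement)."""
    return hashlib.sha256(register + measurement).digest()

register = b"\x00" * 32  # registers start zeroed at boot
for path, content in [("/usr/bin/app", b"elf..."), ("/etc/app.conf", b"cfg...")]:
    register = extend(register, hashlib.sha256(content).digest())

# A verifier replays the measurement list and checks the final register value;
# any missing, extra, or reordered entry produces a different value.
```

This is also why the mixing problems described below matter: if entries from multiple containers land in one list, a verifier cannot replay any single container's measurements in isolation.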

Firstly, the current IMA is not designed for application-specific attestation, and the current policy introduces a lot of data and code that is unrelated to the application; including too many unrelated files undermines the usefulness of attestation. Secondly, the current Linux IMA is not designed for container environments, which leads to practical issues in multi-container deployments where measurement lists from multiple containers are mixed together and cannot be separated. Thirdly, the current Linux CoCo attestation is likewise not designed for container environments, so event logs from multiple containers are mixed together and cannot be separated.

Verus supports attestation policy injection, where it allows each application to inject per-application TCB, thereby better supporting attestation. Meanwhile, Verus addresses integrity measurement issues in multi-container deployment scenarios by adding support for namespaces.

Wenhui Zhang
ByteDance
Keynote
Confidential Computing and AI - It takes two to tango

As AI's potential reshapes the world, the need to secure sensitive training data and AI models becomes paramount. Google, a leader in both AI and confidential computing, unveils its comprehensive approach to securing AI through Confidential Computing. This keynote delves into the challenges of securing AI, emphasizes the inherent link between AI and data privacy, and showcases Google's commitment to building a trustworthy AI future.

Phil Venables
CISO
Google Cloud
Apps & Solutions
Private Data Exchange – Leveraging Confidential Computing to Combat Human Trafficking and Modern Slavery

This session from Hope for Justice, Intel, and Edgeless Systems will unpack the Private Data Exchange, an exciting and innovative project leveraging confidential computing as a powerful tool in the fight against human trafficking and modern slavery.

Modern slavery takes many forms, and it happens all around the world. It’s a situation of exploitation that victims can’t leave, where they’re controlled by threats, punishment, violence, coercion, or deception. Human trafficking is a form of modern slavery. It relies on the internet and other digital technology to thrive, giving it a staggering global scale and pace. For a sense of the scale, there are an estimated 50 million victims worldwide at any given moment.

Organizations like Hope for Justice and Slave-Free Alliance have joined the effort to find victims, as well as perpetrators. The Private Data Exchange is an innovative project, in partnership with Intel and Edgeless Systems, to develop a platform that can encrypt data to protect sensitive information, knowing that behind it are the private lives of people who’ve been abused and traumatized and need protection.

Intel technology enables the Private Data Exchange to leverage Confidential Computing, which processes sensitive data out of view from unauthorized software or system administrators. The data is encrypted and processed in memory, lowering the risk of exposure to the rest of the system, which could compromise it. Confidential Computing relies on hardware-based controls, enabled by Intel SGX enclaves. Whenever victims are rescued, details are recorded and encrypted to be stored securely. If another organization rescues another individual with similar or corroborative data, Intel technology helps link the cases and alerts the appropriate agencies that it has detected a suspicious pattern.

This project will enable multiple global organizations to collaborate and share analyses to prevent human trafficking, respond to situations of exploitation, and ensure victims receive the support they need while shielding their confidential information and regulated data.

Callum Harvie
Hope for Justice
Enrique Restoy
Hope for Justice
Sponsored Talk
Moving Microsoft's $25 billion per year credit card processing system to Azure confidential computing

Certainly, you know you can move workloads into the cloud, and you've already moved many of them, but should you trust the cloud for your most sensitive data and cryptographic keys? Microsoft trusted Azure confidential computing and Azure Key Vault Managed HSM (Hardware Security Module) with over $25 billion in annual credit card transactions. In this talk, Brad will address the questions of adopting a cloud-native solution, and how that affects risk and compliance for sensitive workloads. Should you migrate your snowflake to a cloud-native solution and trust Azure with your most sensitive secrets? Microsoft did.

Simon Gallagher
Microsoft
Brad Turner
Microsoft
Attestation
Increasing Trust and Preserving Privacy: Advancing Remote Attestation

The growing trend towards confidential computing has presented a significant challenge: making remote attestation, a crucial technology for establishing trust in confidential workloads, easily accessible to application developers. Ideally, leveraging the added transparency and security guarantees of attestation as an authentication mechanism should be simple. However, the current ecosystem often requires engineers to integrate remote attestation at the application layer of the network stack. This task is not only burdensome, diverting attention from core business logic, but also poses privacy risks. Developers are compelled to navigate a complex maze of security protocols, with the ever-present danger of accidentally exposing sensitive information about the devices running these workloads.

Developing secure and easy-to-use building blocks requires a collaborative effort between the open-source and standards communities. The IETF is working on various specifications, which are being complemented by prototypes and formal verification of innovative features. Several Internet protocols, such as TLS, OAuth, ACME, Netconf, and CSR/EST, are already incorporating attestation, and others will follow. A key focus of our work is to integrate privacy-preserving techniques that can mitigate the risks inherent in remote attestation, ensuring that this crucial technology can be utilized in a manner that is both user-friendly and privacy-conscious.

In this presentation, we aim to provide a comprehensive overview of the current standardization and open-source implementation initiatives. The goal is to make it easier for the broader community to access and engage in this new development, which is crucial for the advancement of the remote attestation infrastructure in general and the utilization of confidential computing in particular.

Ionuț Mihalcea
Arm
Thomas Fossati
Linaro
Hannes Tschofenig
Apps & Solutions
Navigating Compliance: Leveraging Confidential Compute for DORA-Driven Regulatory Adherence

In this speaker session, we will delve into the landscape of regulatory compliance, with a particular emphasis on the latest standards, notably the requirements of the EU Digital Operational Resilience Act (DORA). The session will unfold in three parts, each essential for organisations seeking a comprehensive understanding of regulatory requirements and effective implementation.

1. Overview of the newest regulations with a focus on DORA: Participants will gain insights into the ever-evolving regulatory landscape, exploring the latest developments with a special emphasis on the EU Digital Operational Resilience Act (DORA) and its data protection regulations (RTS & ITS 6/7).

2. How Confidential Compute facilitates data security: This segment will explore the pivotal role of Confidential Compute in ensuring data security. Attendees will discover how this technology empowers organizations to safeguard sensitive information, providing a secure foundation upon which to build and maintain a better posture with regulatory standards, especially within the context of the newest court rulings and the possibility of confidential compute being an enabler for secure and compliant data processing.

3. Technical implementation strategies: The final part of the session will focus on the practical aspects of technical implementation. Attendees will receive actionable insights into the technical measures that address some of the requirements included in the regulation. A small demo of an implemented use case within the financial sector will be shown.

Louisa Muschal
IBM
Andrea Corbelli
IBM
AI
Seamless attestation of Intel TDX and NVIDIA H100 TEEs for Confidential AI

AI is now the most significant workload in data centers and the cloud. It’s being embedded into other workloads, used for standalone deployments, and distributed across hybrid clouds and the edge. Many of the demanding AI workloads require hardware acceleration with a GPU. Many AI models are considered priceless intellectual property: companies spend millions of dollars building them, and the parameters and model weights are closely guarded secrets. The data sets used to train these models are also considered highly confidential and can create a competitive advantage. As a result, data and model owners are looking for ways to protect these not just at rest and in transit, but in use as well.

Confidential Computing is an industry movement to protect sensitive data and code while in use by executing inside a hardware-hardened, attested Trusted Execution Environment (TEE), where code and data can be accessed only by authorized users and software. For AI workloads, this would include the model parameters and weights, and the training or inferencing data. Attestation is an essential process in Confidential Computing in which a stakeholder is provided cryptographic confirmation of the state of a Confidential Computing environment. It asserts that the instantiated TEE is genuine, conforms to their security policies, and is configured exactly as expected. Attestation is critical to establish trust in the computing platform you’re about to use with your highly sensitive data.

Intel and NVIDIA deliver Confidential Computing technologies that establish independent TEEs on the CPU and GPU, respectively. For a customer, this presents an attestation challenge: attestation from two different services is required to gather the evidence needed to verify the trustworthiness of the CPU and GPU TEEs. Intel and NVIDIA are collaborating to provide a unified attestation solution for customers to verify the trustworthiness of the CPU and GPU TEEs for Confidential Computing based on Intel® Xeon® processors with Intel® Trust Domain Extensions (Intel TDX) and NVIDIA H100 Tensor Core GPUs.

In this session you will hear from Intel and NVIDIA about the TEE architectures and how we are enabling seamless attestation of the two TEEs using Intel Trust Authority and the NVIDIA Remote Attestation Service (NRAS).

Raghu Yeluri
Intel
Michael O'Connor
NVIDIA
Sponsored Talk
End-to-End Encryption with the Split-Trust Encryption Tool

How can you cryptographically and verifiably protect your data from unauthorized access when it leaves your premises and lands in the cloud? With the open source Split-Trust Encryption Tool (STET) library, we made it possible to seamlessly encrypt data as it’s sent to the cloud. Using an external key management solution, you have an attestable way to guarantee your data can only be decrypted by a Confidential VM. Moreover, you have the option to split this trust between multiple KMS systems, ensuring no single KMS has enough information to unilaterally access your data.
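
The split-trust idea can be sketched with a simple XOR secret split (illustrative only; STET's actual format and KMS integration differ): the data-encryption key is split so that neither KMS share alone reveals anything about it.

```python
import secrets

def split(key: bytes) -> tuple:
    """Split a key into two shares; each share alone is uniformly random."""
    share_a = secrets.token_bytes(len(key))               # held by KMS A
    share_b = bytes(a ^ k for a, k in zip(share_a, key))  # held by KMS B
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Only a party that obtains both shares can reconstruct the key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)
a, b = split(key)
assert combine(a, b) == key
```

In the attested flow described above, each KMS would release its share only to a Confidential VM that passes attestation, so no single KMS can unilaterally decrypt the data.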

Jessie Liu
Google
Foundations
6 pm - 8 pm
News from the COCONUT-SVSM Community

The COCONUT-SVSM project was publicly announced last year at the OC3 conference and has gained a lot of traction since then. A diverse community has formed around the project, and COCONUT is on its way to becoming an official Confidential Computing Consortium project. This session will give an overview of what happened in the COCONUT project over the last year and cover some of the most exciting developments in detail. For that, the session is divided into four parts.

In part one, Joerg Roedel from SUSE will summarize the developments of the COCONUT-SVSM project in 2023 and the current status of the project. An outlook on what is expected in the coming year will also be given.

Part two will be led by Claudio Carvalho and Gheorghe Almasi from IBM and will cover the work on using the COCONUT-SVSM to run a virtual Trusted Platform Module (vTPM). Keylime will be used to demonstrate how the SVSM vTPM can be leveraged to attest the entire lifetime of AMD SNP Confidential VMs.

In the third part, Oliver Steffen and Stefano Garzarella from Red Hat will describe an approach in which the COCONUT-SVSM carries out the remote attestation during the early boot phase. It then provides a vTPM and a UEFI variable service to the OVMF firmware and the Linux guest OS to secure the rest of the boot process. The remote attestation flow from within the COCONUT-SVSM will be demonstrated using a key broker service.

In part four, Ralph Waldenmaier from Amazon Web Services will demonstrate how to develop and run COCONUT-SVSM in Amazon EC2 on a bare metal instance. He will explain what bare metal instances are, how they work, how they differ from the existing virtualized SEV-SNP-enabled instances, and why they are a great fit for COCONUT-SVSM development. Finally, he will provide the steps necessary to set up and demo a working COCONUT-SVSM environment on bare metal instances in Amazon EC2.

Jörg Rödel
SUSE
Claudio Carvalho
IBM Research
George Almasi
IBM
Oliver Steffen
Red Hat
Stefano Garzarella
Red Hat
Ralph Waldenmaier
AWS