Speakers & Talks

KEYNOTE
Journey towards the Confidential Cloud
Mark Russinovich
CTO, Microsoft Azure
KEYNOTE
Welcome Session and Introduction to Confidential Computing

Welcome to OC3! In this session, we'll outline the agenda for the day and give an introduction to confidential computing. We'll assess the current state of the confidential-computing space and compare it to last year's. We'll take a look at key problems that still require attention. We'll close with an announcement from Edgeless Systems.

Felix Schuster
Edgeless Systems
Cloud-Native
Confidential Containers: Bringing Confidential Computing to the Kubernetes Workload Masses

The new Confidential Computing security frontier is still out of reach for most cloud-native applications. The Confidential Containers project aims to close that gap by seamlessly running unmodified Kubernetes pod workloads in their own dedicated Confidential Computing environments.

Description

Confidential Computing expands the cloud threat model into a drastically different paradigm. In a world where more and more cloud-native applications run through hybrid clouds, not having to trust your cloud provider anymore is a very powerful and economically attractive proposition. Unfortunately, the current confidential computing cloud offerings and architectures are either limited in scope, workload intrusive, or provide node-level isolation only. In contrast, the Confidential Containers open-source project integrates the Confidential Computing security promise directly into cloud-native applications by allowing any Kubernetes pod to run in its own exclusive trusted execution environment.

This presentation will start by describing the Confidential Containers software architecture. We will show how it reuses some of the hardware-virtualization-based Kata Containers software stack components to build confidential micro-VMs for Kubernetes workloads to run in. We will explain how those micro-VMs can transparently leverage the latest Confidential Computing hardware implementations like Intel TDX, AMD SEV, or IBM SE to fully protect pod data while it's in use.

Going into more technical detail, we will walk through several key components of the Confidential Containers software stack, such as the Attestation Agent, the container image management Rust crates, and the Kubernetes operator. Overall, we will show how those components integrate to form a software architecture that verifies and attests tenant workloads which pull and run encrypted container images on top of encrypted memory only.

The final parts of the presentation will first expand on the project roadmap and where it wants to go after its initial release. Then we will conclude with a mandatory demo of a Kubernetes pod being run in its own trusted execution environment, on top of an actual Confidential Computing-enabled machine.
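
As an illustration of how such a pod ends up in its own micro-VM, the sketch below uses the standard Kubernetes RuntimeClass mechanism that Kata-based runtimes hook into. It is a minimal sketch under assumptions: the runtime class name "kata-cc" and the demo image are placeholders, and the actual names depend on the Confidential Containers operator and the TEE in use.

```python
# Minimal sketch: requesting that a pod run under a confidential (Kata-based)
# runtime via Kubernetes' RuntimeClass mechanism. The runtime class name
# "kata-cc" is an assumption; the real name depends on the installed
# Confidential Containers operator and the underlying TEE (TDX, SEV, SE, ...).
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="confidential-demo"),
    spec=client.V1PodSpec(
        runtime_class_name="kata-cc",  # selects the confidential micro-VM runtime
        containers=[client.V1Container(name="app", image="nginx:latest")],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```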

Samuel Ortiz
Apple
Apps & Solutions
Confidential Computing in German E-Prescription Service

Description

Approximately 500 million medical prescriptions are issued, dispensed, and procured each year in Germany. gematik is legally mandated to develop the processes involved into their digital form within its public digital infrastructure (Telematikinfrastruktur). Due to the staged development of these processes, as well as their variable collaborative nature involving medical professionals, patients, pharmacists, and insurance companies, a centralized approach to data processing was chosen, since it provides adequate design flexibility. In this setup, data protection regulations require any processed medical data to be reliably protected from unauthorized access from within the operating environment of the service. Consequently, the solution is based on Intel SGX as the Confidential Computing technology. This talk introduces the solution, focusing on the trusted computing base, attestation, and availability requirements.

Andreas Berg
Gematik
Apps & Solutions
Proof of Being Forgotten: Verified Privacy Protection in Confidential Computing Platform

For data owners, whether their data has been erased after use is questionable and needs to be proved, even when executing in a TEE. We introduce a security proof that verifies that sensitive data only lives inside the TEE and is guaranteed to be erased after use. We call it proof of being forgotten.

Description

One main goal of Confidential Computing is to guarantee that the security and privacy of data in use are under the protection of a hardware-based Trusted Execution Environment (TEE). The Trusted Execution Environment ensures that the content (code and data) inside the TEE is not accessible from outside. However, for data owners, whether their sensitive data has been intentionally or unintentionally leaked by the code inside the TEE is still questionable and needs to be proved. In this talk, we'd like to introduce the concept of Proof of Being Forgotten (PoBF). What PoBF provides is a security proof: enclaves with PoBF can assure users that sensitive data only lives inside an SGX enclave and will be erased after use. By verifying this property and presenting a report with proof of being forgotten to data owners, the complete data lifecycle protected by the TEE can be strictly controlled, enforced, and audited.
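
To make the "forgotten" property concrete, here is a minimal illustration (plain Python, purely for exposition; PoBF itself targets SGX enclaves and is not this code): the sensitive buffer is used and then overwritten before control returns, which is the behavior PoBF verifies and reports to data owners.

```python
# Illustration only, not the PoBF implementation: the property being proved is
# that a secret handled inside the enclave is erased once the computation ends,
# so no plaintext residue survives the call.
def process_secret(secret: bytearray) -> int:
    """Use the secret, then erase it in place before returning."""
    try:
        return sum(secret) % 251  # stand-in for the real computation
    finally:
        secret[:] = bytes(len(secret))  # overwrite the buffer with zeros

key = bytearray(b"very-sensitive-key-material")
print(process_secret(key))
print(key)  # all zero bytes: the secret has been "forgotten"
```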

Mingshen Sun
Baidu
Cloud-Native
Kubernetes meets confidential computing - the different ways of scaling sensitive workloads

Cloud-native and confidential computing will inevitably grow together. This talk maps the design space for confidential Kubernetes and shows the latest corresponding developments from Edgeless Systems.

Description

Kubernetes is the most popular platform for running workloads at scale in a cloud-native way. With the help of confidential computing, Kubernetes deployments can be made verifiable and can be shielded from various threats. The simplest approach towards "confidential Kubernetes" is to run containers inside enclaves or confidential VMs. While this simple approach may look compelling on the surface, on closer inspection, it does not provide great benefits and leaves important questions unanswered: How to set up confidential connections between containers? How to verify the deployment from the outside? How to scale? How to do updates? How to do disaster recovery?

In this talk, we will map the solution space for confidential Kubernetes and discuss pros and cons of the different approaches. In this context, we will give an introduction to our open-source tool MarbleRun, which is a control plane for SGX-based confidential Kubernetes. We will show how MarbleRun, in conjunction with our other open-source tools EGo and EdgelessDB, can make existing cloud-native apps end-to-end confidential.

We will also discuss the additional design options for confidential Kubernetes that are enabled by confidential VM technologies like AMD SEV, Intel TDX, or AWS Nitro. In this context, we will introduce and demo our upcoming product Constellation, which uses confidential VMs to create "fully confidential" Kubernetes deployments, in which all of Kubernetes runs inside confidential environments. Constellation is an evolution of MarbleRun that strikes a different balance between ease of use and TCB size.

Moritz Eckert
Edgeless Systems
Apps & Solutions
SGX-protected Scalable Confidential AI for ADAS Development

Privacy is an important aspect of AI applications. We combine Trusted Execution Environments, a library OS, and a scalable service mesh for confidential computing to achieve these security guarantees for TensorFlow-based inference and training with minimal performance and porting overheads.

Description

Access to data is a crucial requirement for the development of advanced driver-assistance systems (ADAS) based on Artificial Intelligence (AI). However, security threats, strict privacy regulations, and potential loss of Intellectual Property (IP) ownership when collaborating with partners can turn data into a toxic asset (Schneier, 2016): Data leaks can result in huge fines and in damage to brand reputation. An increasingly diverse regulatory landscape imposes significant costs on global companies. Finally, ADAS development requires close collaboration across original equipment manufacturers (OEMs) and suppliers. Protecting IP in such settings is both necessary and challenging.
Privacy-Enhancing Technologies (PETs) can alleviate all these problems by increasing control over data. In this paper, we demonstrate how Trusted Execution Environments (TEEs) can be used to lower the aforementioned risks related to data toxicity in AI pipelines used for ADAS development.

Contributions
The three most critical success factors for applying PETs in the automotive domain are low overhead in terms of performance and efficiency, ease of adoption, and the ability to scale. ADAS development projects are major efforts generating infrastructure costs in the order of tens to hundreds of millions. Hence, even moderate efficiency overheads translate into significant cost overhead. Before the advent of Intel 3rd Generation Xeon Scalable Processors (Ice Lake), the overhead of SGX-protected CPU-based training of a TensorFlow model was up to 3-fold compared to training on the same CPU without using SGX. In a co-engineering effort, Bosch Research and Intel have been able to effectively eliminate these overheads.

In addition, ADAS development happens on complex infrastructures designed to meet the highest demands in terms of storage space and compute power. Major changes to these systems for implementing advanced security measures would be prohibitive in terms of time and effort. We demonstrate that Gramine’s (Tsai, Porter, & Vij, 2017) lift-and-shift approach keeps the effort for porting existing workloads to SGX minimal. Finally, being able to process millions of video sequences consisting of billions of frames in short development cycles necessitates a scalable infrastructure. By using the MarbleRun (Edgeless Systems GmbH, 2021) confidential service mesh, Kubernetes can be transformed into a substrate for confidential computing at scale.

To demonstrate the validity of our approach, Edgeless Systems and Bosch Research jointly implemented a proof of concept of an exemplary ADAS pipeline using SGX, MarbleRun, and Gramine as part of the Open Bosch venture client program.

Stefan Gehrer
Bosch
Scott Raynor
Intel
Moritz Eckert
Edgeless Systems
Low-Level Magic
AMD Secure Nested Paging with Linux - Development Update

Support for AMD Secure Nested Paging (SNP) for Linux is under heavy development. There is work ongoing to make Linux run as an SNP guest and to host SNP protected virtual machines. I will explain the key concepts of SNP and talk about the ongoing work and the directions being considered to enable SNP support in the Linux kernel and the higher software layers. I will also talk about proposed attestation workflows and their implementation.

Jörg Rödel
SUSE
Apps & Solutions
Confidential Computing Governance

Regulated institutions have strong business reasons to invest in confidential computing. As with any new technology, governance takes center stage. This talk explores the vast landscape of considerations involved in provably and securely operationalizing Confidential Computing in the public cloud.

An accompanying paper is available for download.

Description

Heavily regulated institutions have a strong interest in strengthening protections around data entrusted to public clouds. Confidential Computing is an area that will be of great interest in this context. Securing data in use raises significantly more questions around proving the effectiveness of new security guarantees than securing data in transit or data at rest.

Curiously, this topic has so far received no attention in the CCC, the IETF, or anywhere else that we're aware of.

This talk will propose a taxonomy of confidential computing governance and break the problem space down into several constituent domains, with requirements listed for each. Supply chain and toolchain considerations, controls matrices, control plane governance, attestation and several other topics will be discussed.

Mark Novak
JPMorgan Chase
Low-Level Magic
Transparent Release Process for Releasing Verifiable Binaries

Binary attestation allows a remote machine (e.g., a server) to attest that it is running a particular binary. However, usually, the other party (e.g., a client) is interested in guarantees about properties of the binary. We present a release process that allows checking claims about the binaries.

Description

Project Oak provides a trusted runtime and a generic remote attestation protocol for a server to prove its identity and trustworthiness to its clients. To do this, the server, running inside a trusted execution environment (TEE), sends TEE-provided measurements to the client. These measurements include the cryptographic hash of the server binary signed by the TEE’s key. This is called binary attestation.

However, the cryptographic hash of the binary is not sufficient for making any guarantees about the security and trustworthiness of the binary. What is really desired is semantic remote attestation, which allows attestation to the properties of a binary. However, these approaches are expensive, as they require running checks (e.g., a test suite) during the attestation handshake.

We propose a release process to fill in this gap by adding transparency to binary attestation. For transparency, the release process publishes all released binaries in a public and externally maintained verifiable log. Once an entry has been added to the log, it can never be removed or changed. So a client, or any other interested party (e.g., a trusted external verifier or auditor), can find the binary in the verifiable log. Finding the binary in the verifiable log is important for the client as it gives the client the possibility to detect, with higher likelihood, if it is interacting with a malicious server. Having a public verifiable log is important as it supports public scrutiny of the binaries.

In addition, we are implementing an ecosystem to provide provenance claims about released binaries. We use SLSA provenance predicates for specifying provenance claims. Every entry in the verifiable log corresponding to a released binary contains a provenance claim, cryptographically signed by the team or organization releasing the binary. The provenance claim specifies the source code and the toolchain for building the binary from source. The provenance details allow reproducing server binaries from the source, and verifying (or more accurately falsifying) security claims about the binaries by inspecting the source, its dependencies, and the build toolchain.
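
As a rough illustration of the client-side check described above, the sketch below recomputes a binary's digest and compares it with the digest recorded in a provenance statement retrieved from the verifiable log. The field layout follows the in-toto/SLSA statement format, but the exact schema used by the release process is an assumption here, and verification of the signature over the statement is omitted.

```python
# Sketch (under assumptions): check that a downloaded binary matches a subject
# digest listed in an in-toto/SLSA-style provenance statement. Signature
# verification of the statement and log-inclusion proofs are left out.
import hashlib
import json

def binary_digest(path: str) -> str:
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def digest_in_provenance(binary_path: str, statement_path: str) -> bool:
    with open(statement_path) as f:
        statement = json.load(f)
    measured = binary_digest(binary_path)
    # An in-toto statement lists released artifacts under "subject".
    return any(
        subject.get("digest", {}).get("sha256") == measured
        for subject in statement.get("subject", [])
    )

# Hypothetical usage with placeholder file names:
# print(digest_in_provenance("server_binary", "provenance.json"))
```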

Razieh Behjati
Google
Cloud-Native
Project Veraison - Verification of Attestation

OSS Project Veraison builds software components that can be used to create Attestation Verification services required to establish that a CC environment is trustworthy. These flexible & extensible components can be used to address multiple Attestation technologies and deployment options.

Description

Establishing that a Confidential Computing environment is trustworthy requires the process of Attestation. Verifying the evidential claims in an attestation report can be a complex process, requiring knowledge of token formats and access to a source of reference data that may only be available from a manufacturing supply chain.

Project Veraison (VERificAtIon of atteStatiON) addresses these complexities by building software components that can be used to create Attestation Verification services.

This session discusses the requirements for determining that an environment is trustworthy, the mechanisms of attestation, and how Project Veraison brings consistency to the problems of appraising technology-specific attestation reports and connecting to the manufacturing supply chain where the reference values of what is 'good' reside.

Simon Frost
Arm
Thomas Fossati
Arm
Apps & Solutions
From zero to hero: making Confidential Computing accessible

How can we make Confidential Computing accessible, so that developers from all levels can quickly learn and use this technology? In this session, we welcome three Outreachy interns, who had zero knowledge of Confidential Computing, to showcase what they've developed in just a few months.

Description

Implementing state-of-the-art Confidential Computing is complex, right? Developers must understand how Trusted Execution Environments work (whether they are process-based or VM-based), be familiar with the different platforms that support Confidential Computing (such as Intel's SGX or AMD's SEV), and have knowledge of complex concepts such as encryption and attestation.

Enarx, an open-source project that is part of the Confidential Computing Consortium, abstracts away all these complexities and makes it really easy for developers of all levels to implement and deploy applications to Trusted Execution Environments.

The Enarx project partnered with Outreachy, a diversity initiative from the Software Freedom Conservancy, to welcome three interns who had zero knowledge of Confidential Computing. In just a few months, they learned the basics and started building demos in their favorite languages, from simple to more complex.
In this session, they'll have the opportunity to showcase their demos and share what they've learned. Our hope is to demonstrate that Confidential Computing can be made accessible and easy to use by all developers.

Nick Vidal
Profian
Cloud-Native
Understanding trust relationships for Confidential Computing

Confidential Computing requires trust relationships. What are they, how can you establish them, and what are the possible pitfalls? Our focus will be cloud deployments, but we will look at other environments such as telecom and Edge.

Description

Deploying Confidential Computing workloads is only useful if you can be sure what assurances you have about trust. This requires establishing relationships with various entities, and sometimes deciding that certain entities are not appropriate to trust. Examples of some of the possible entities include:
- hardware vendors
- CSPs
- workload vendors
- open source communities
- independent software vendors (ISVs)
- attestation providers 

This talk will address how and why trust relationships can be established, the dangers of circular relationships, some of the mechanisms for evaluating them, and what they allow when (and if!) they are set up. It describes the foundations for considering when Confidential Computing makes sense, and when you should mistrust the claims of some of those offering it!

Mike Bursell
Profian
Low-Level Magic
Exploring OSS guest firmware for Confidential VMs

As confidential VMs become a reality, trusted components within the guest, such as guest firmware, become increasingly relevant for the trust and security posture of the VM. In this talk, we will focus on our explorations in building “customer managed guest firmware” for increased control and auditability of the CVM’s TCB.

Description

Confidential computing developers like flexibility and control over the guest TCB because that allows them to manage what components make up the trusted computing base. In a VM, these requirements are tricky to meet. In this talk, you will learn how in Azure we are enabling new capabilities to help you make a full VM a Trusted Execution Environment and help your app perform remote attestation with another trusted party in a Linux VM environment with OSS guest firmware options.

Pushkar V. Chitnis
Microsoft
Ragavan Dasarathan
Microsoft
Apps & Solutions
Mystikos Python support with demo of confidential ML inference using PyTorch

In this talk, we present the Mystikos project’s progress on Python programming language support and an ML workflow in a cloud environment that preserves the confidentiality of the ML model and the privacy of the inference data even if the cloud provider is not trusted. In addition, we provide a demo showing how to protect the data using secret keys stored with Azure Managed HSM, how to retrieve the keys from MHSM at run time using attestation, and how to use the keys for decryption. We also demonstrate how an application can add the secret provisioning capability with simple configuration.

Description

Confidential ML involves many stakeholders: the owner of the input data, the owner of the inference model, the owner of the inference results, etc. Porting ML workloads to Confidential Computing, and managing keys and their retrieval into the Confidential Computing ML application securely and confidentially, are challenging for users who have a limited understanding of Confidential Computing confidentiality and security. We provide a solution that implements the heavy lifting in the Mystikos runtime: the programming language runtime, the attestation, the encryption/decryption, the key provisioning, etc., so that users only have to convert their Python-based ML applications and configure them with a few lines of JSON. While the demo takes advantage of the Secure Key Unwrap capability of Azure Managed HSM, the solution is based on an open framework that can be extended to other key vault providers.
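
The following sketch illustrates only the final step of that flow: decrypting protected input with a key that the key-release service handed over after successful attestation. It is not Mystikos' internal API; it is a hedged example of envelope decryption using the 'cryptography' package, and the key, nonce, and ciphertext names are placeholders.

```python
# Conceptual sketch: once the TEE has attested itself and Azure Managed HSM
# (or another key vault) releases the key, the workload decrypts its input.
# Not Mystikos' API; just plain AES-GCM decryption with the released key.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def decrypt_input(released_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    """Decrypt AES-GCM-protected data with a key released after attestation."""
    return AESGCM(released_key).decrypt(nonce, ciphertext, None)

# Hypothetical usage: key, nonce, and ciphertext come from the key-release
# protocol and the encrypted dataset, respectively.
# plaintext = decrypt_input(key, nonce, ciphertext)
```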

Xuejun Yang
Microsoft
Apps & Solutions
Smart Contracts with Confidential Computing for Hyperledger Fabric

Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Intel SGX to protect the integrity and confidentiality of Smart Contracts. This talk is an FPC 101 and will showcase the benefits of Confidential Computing in the blockchain space.

Description

Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Confidential Computing Technology to protect the integrity and confidentiality of Smart Contracts.

In this talk, we will learn what Fabric Private Chaincode is and how it can be used to implement privacy-sensitive use cases for Hyperledger Fabric. The goal of this talk is to equip developers and architects with all the necessary background and a first hands-on experience to adopt FPC for their projects.

We start with an introduction to FPC, explaining the basic FPC architecture, security properties, and hardware requirements. We will cover the FPC Chaincode API and application integration using the FPC Client SDK.
The highlight of this talk will be a showcase of a new language support feature for Fabric Private Chaincode using the EGo open-source SDK.

Marcus Brandenburger
IBM Research
Apps & Solutions
Using Secure Ledger Technology to Tackle Compliance and Auditing

Secure ledger technology is enabling customers who need to maintain a source of truth where even the operator is outside the trusted computing base. Top examples: recordkeeping for compliance purposes and enabling trusted data.

Description

This session will dive into how secure ledgers provide security and integrity to customers in compliance- and auditing-related scenarios. Specifically, customers who must maintain a source of truth that remains tamper-protected from everyone. We will also discuss how secure ledgers benefit from confidential computing and open source.

Shubhra Sinha
Microsoft
Apps & Solutions
Unlock the mysteries of data with confidential computing powered by Intel SGX

We all understand that data sovereignty in highly regulated industries like government, healthcare, and fintech is critical; it prohibits even the most basic data insights because data cannot be moved to a centralized location for collaboration or model training. Confidential computing powered by Intel Software Guard Extensions (Intel SGX) changes all of that. Join us to learn how customers across every industry are gaining insights never before possible.

Laura Martinez
Intel
Apps & Solutions
PCI compliance with Azure confidential computing

Storing payment data in your e-commerce site may expose your business to challenges for PCI compliance. Azure confidential computing provides a platform for protecting your customers’ financial information at scale.

Stefano Tempesta
Microsoft
Apps & Solutions
"PODfidential" Computing - Protecting Workloads with Cloud Native Scale and Agility

Balancing data privacy and runtime protection with the ease and nimbleness of deployments is the reality of the current state of confidential computing.
We explore the adoption of Kata POD isolation with protected virtualisation, bringing the simplicity of pods and the availability of orchestration to confidential computing.

Description

We are discussing the use of Kata POD isolation with protected virtualisation, striving for confidential computing with a cloud-native model while preserving most of the K8s compliance. This talk will summarise the state of the technical discussion in the industry, discuss solutions and open questions, and give a hint into the future of confidential computing with cloud-native models. The speed of adoption of confidential computing will to a large extent depend on the ease of use for developers and administrators in incorporating runtime protection into the established technology stack. From use cases to technology demos, the technology team is moving forward.

Stefan Liesche
IBM
James Magowan
IBM
Attestation
Project Amber - Intel's operator independent, scalable, multi-cloud attestation service

Project Amber is the code name for Intel’s groundbreaking service/SaaS-based implementation of an independent trust authority that provides attestation of workloads in a public/private multi-cloud environment.

Designed to remotely verify and assert the trustworthiness of compute assets such as Trusted Execution Environments (TEEs), devices, Roots of Trust, and more, the service is operationally independent from the Cloud/Edge infrastructure provider hosting the confidential computing workloads. This talk will focus on end-user needs in adopting confidential computing and provide an overview of Intel's Project Amber addressing those needs.

Nikhil Deshpande
Intel
Raghu Yeluri
Intel
Foundations
Storage subsystem for hardware TEE based confidential containers

Hardware-based TEEs are supported by more and more processor architectures. Many hardware TEEs, like AMD SEV/SEV-ES/SEV-SNP, IBM PEF, and Intel TDX, support virtual-machine-based confidential computing. These technologies ensure the confidentiality of CPU registers, CPU state, and memory with relatively low overhead. But we still face heavy storage I/O overhead for confidential containers and VMs. This talk will give an introduction to:

1) requirements of confidential container storage subsystems

2) currently available storage technologies and some benchmark results

3) container image acceleration technologies for confidential containers.

Jiang Liu
Alibaba Cloud
Foundations
Intel Trust Domain Extensions

Intel recently announced that it is boosting its Confidential Computing portfolio by adding Trust Domain Extensions to its 4th Generation Xeon Scalable Processors.

Join Intel's Chief Architect, Simon Johnson, for a technical tour of Intel Trust Domain Extensions.

Simon Johnson
Intel
Cloud Native
Wrapping entire Kubernetes clusters into a confidential-computing envelope with Constellation

Kubernetes is the most popular platform for managing and scaling containerized workloads. It's practically used everywhere. Bringing comprehensive confidential-computing features to Kubernetes is a requirement for confidential computing to become mainstream.

In the first part of this talk, we explain the different approaches for running confidential-computing workloads on Kubernetes. We discuss desirable features and security properties and corresponding challenges that need to be addressed. In the second part, we describe the architecture of our "confidential" Kubernetes distribution Constellation (https://github.com/edgelesssys/constellation). Constellation is open source and uses Confidential VMs (currently AMD SEV, in the future also Intel TDX and Arm CCA) to ensure that entire Kubernetes clusters are runtime-encrypted and shielded from the infrastructure. To the best of our knowledge, Constellation is the only software able to achieve this. We explain how - with the help of rigorous cluster-wide remote attestation, mkosi-based images, and Sigstore - Constellation is able to verify and attest the integrity of an entire cluster. We also explain how we use Cilium, Rook, Ceph and other cloud-native projects to ensure that data is not only encrypted at runtime but also on the wire and in storage. In the third part, we give a live demo of complex real-world applications like GitLab running end-to-end confidential on Constellation in the public cloud.

We close with a discussion of use cases and an overview of future work.

Malte Poll
Edgeless Systems
Moritz Eckert
Edgeless Systems
Keynote
Confidential computing: from niche to mainstream

The confidential computing initiative, aided by its community, has made significant progress by overcoming barriers to accessing sensitive data in a privacy-preserving manner and thereby creating a market opportunity. As the confidential computing market matures and settles into three distinct lanes, the onus lies on the industry to build the technology and market-enablement platforms for deployment at scale for every workload.

During his keynote at OC3, Greg Lavender, Intel’s senior vice president, CTO, and general manager of the Software and Advanced Technology Group, will share his insights on what it will take to shift confidential computing from a niche market to mainstream.

Greg Lavender
CTO
Intel
Attestation
Hardware-backed attestation in TLS

Authentication of confidential computing applications is a critical yet complex task. PKI-based authentication relies heavily on software to anchor the trustworthiness of workloads, therefore failing to reliably convey the security state of the system in the face of impersonation and persistent attackers. This is most apparent in cases like confidential computing, where the underlying platform is particularly exposed and out of the control of an owner looking for robust security. Hardware features have thus been introduced to enable remotely verifiable “trust metrics” using attestation. Such hardware-backed features provide cryptographic proof of the software stack, and strong guarantees that the cryptographic keys used by the workload are properly protected from exfiltration.

Ionuț Mihalcea
Arm
Foundations
Attesting NVIDIA GPUs in a confidential computing environment

As confidential computing extends to include NVIDIA GPUs at cloud service providers and in enterprise data centers, attestation features that provide and verify evidence of the trustworthiness of the environment are paramount.

Attend this talk to learn about the Hopper GPU attestation architecture, which is used to measure the necessary security posture to protect your valuable data in use. We will cover how attestation is used throughout the life cycle of the confidential compute session and how to solve the time-domain problem of attestation evidence. Also learn how the attestation methodology supports the IETF Remote Attestation Procedures (RATS) standard, and so is able to connect to verifier options both inside and outside of your virtual machine environment.

Mark Overby
NVIDIA
Cloud Native
"Peer pods" - a practical (or cloud-native) confidential computing approach in virtualized environments

The Confidential Containers project in the CNCF has been making great progress in leveraging and enhancing the Kata Containers project to support the use of TEEs to protect the pod boundary when using Kubernetes. A practical challenge to adoption is the need for bare-metal compute nodes, which tends to work against the benefits of using a public cloud; essentially, TEE support for nested virtualization is not offered by cloud providers today.

However, TEE-based VMs are generally offered by cloud providers today, based on a variety of technologies: Intel TDX, AMD SEV, and IBM Secure Execution. We have taken the simple idea "What if we could use the existing TEE-based VM offerings to deliver Confidential Computing pods for cloud-native workloads?" and developed a solution that will soon be part of the main Confidential Containers project release. We refer to this solution as "Peer Pods"; it combines the Confidential Containers approach (building on Kata Containers) with taking advantage of cloud service provider IaaS APIs to provision and manage pods as independent TEE-based VMs.

We will give you a quick demo and overview of the challenges we have solved along the way.

James Magowan
IBM
Steven Horsman
IBM
Foundations
Opening the I/O gates with confidential containers

This presentation will first go through a quick introduction of the TEE-I/O framework and explain how the combination of secured SPDM sessions, PCIe link protections through IDE and TDISP-based trusted device interface (TDI) assignments provide a secure and confidential I/O architecture. Next, we will describe how confidential containers will implement and support that framework. Last but not least, we will look at how TEE-I/O can be seamlessly integrated with the existing Kubernetes device frameworks, the device plugin and the upcoming CDI.

Samuel Ortiz
Rivos
Jiewen Yao
Intel
Panel
Industry Perspectives: The impact and future of confidential computing 

This exciting panel discussion aims to drive clarity on what confidential computing is and what it is not. The panelists will discuss confidential computing's definition, common use cases, technical challenges, and attestation, and make predictions about the future of this technology, especially regarding AI.

The panel will be moderated by Felix Schuster.

Ian Buck
Vice President of Hyperscale and HPC
NVIDIA
Mark Papermaster
CTO & EVP
AMD
Mark Russinovich
CTO
Microsoft Azure
Greg Lavender
CTO
Intel
Foundations
Making PCI devices ready for confidential computing

This presentation will describe how to build a TEE-IO-ready device firmware to support confidential computing.

For example, in order to maintain the confidentiality of the workload, the TVM and the TEE-IO device must first establish an authenticated secure session to protect the data in transit. Inside the TEE-IO device, the device security manager (DSM) must then isolate device interfaces (e.g. PCIe Virtual Functions) between each other in order for a TVM to safely accept it into its TCB.

The presentation will also introduce multiple industry standards and explain how they are involved with confidential computing. It will describe how DMTF Secure Protocol and Data Model (SPDM), PCI-SIG Component Measurement and Attestation (CMA), Integrity and Data Encryption (IDE), and TEE Device Interface Security Protocol (TDISP) must be combined together in order to safely extend TEEs with devices.

Jiewen Yao
Intel
Samuel Ortiz
Rivos
Apps & Solutions
Enabling faster AI model training in healthcare with Azure confidential computing

Learn how drug research organizations like Novartis Biome are accelerating the time to train their AI models by using confidential computing to gain access to pools of patient data that were previously unattainable.

Mary Beth Chalk
Co-founder
BeeKeeperAI
Vikas Bhatia
Microsoft
Foundations
Removing our Hyper-V host OS and hypervisor from the Trusted Computing Base (TCB)

During the last few years, the Microsoft OS team innovated in ways to remove our Hyper-V host OS and hypervisor from the Trusted Computing Base.

In this talk, we'll describe how our entire Hyper-V stack evolved to leverage memory encryption, memory integrity, and CPU state protection mechanisms enforced by the hardware to protect data against all types of attacks. We will showcase “a day in the life of data throughout our virtualization stack” to cover the technical end-to-end flow from memory to CPU to computation, where the data itself is protected and inaccessible to our entire Hyper-V stack.

Carolina Perez-Vargas
Microsoft
Jin Lin
Microsoft
Sponsored Talk
Lessons learned from production confidential computing customer deployments

Confidential computing presents a huge opportunity to raise the bar on security to lower risk for any organization processing, storing or managing sensitive code and data in the cloud. However, this requires different approaches to traditional problems, in particular a shift from reactive controls to proactive control methods.

This session will walk through production customer scenarios and reflect on the lessons learned spanning technical architecture, deployment, performance, security policy and trust models to help organizations embrace confidential computing efficiently and with the shortest time to success.

Bobbie Chen
Anjuna
Foundations
Virtual TPM based attestation for Intel Trust Domain Extensions

In this talk, we will introduce a virtual Trusted Platform Module (TPM) for Intel Trust Domain Extensions (TDX).

A confidential computing solution requires attestation to ensure the Trusted Execution Environment (TEE) is launched as expected. Currently, TPM-based attestation is widely adopted by the industry. It is beneficial to reuse the existing TPM-based attestation framework for confidential computing use cases.

We will discuss how to enable a virtual TPM (vTPM) for Intel TDX, including the changes in the hypervisor (KVM), the TEE guest virtual firmware (TDVF), and the TEE guest OS kernel (Linux), as well as the virtual TPM instance. The backend attestation service can combine TPM-based attestation for the TEE and TD-based attestation for the vTPM instance.

Jiewen Yao
Intel
Attestation
Demystifying remote attestation

The attestation process is at the core of the security guarantees provided by confidential computing, but each technology provides different security and usability tradeoffs.

In this talk, we give an overview of how each confidential computing vendor implemented their respective technologies and attestation primitives, and how the main cloud providers expose them. Finally, we'll also cover how the Decentriq platform abstracts over these, and the technical and security challenges we have encountered.

Andras Slemmer
Decentriq
Gaëtan Wattiau
Decentriq
Apps & Solutions
Towards the medicine of the future in Bavaria and Germany, one heartbeat at a time with confidential computing

Since 2018, the Bavarian ministry of health has invested 24.5 million euros in the DigiMed Bayern project with the ambition to create the lighthouse that will guide Germany towards the medicine of the future. By developing a legal framework and a secure environment powered by confidential computing technologies, over one hundred researchers, clinicians, lawyers, and tinkerers from academia and industry across 14 institutions have found a sovereign computing environment to collaborate on sensitive multi-omic medical data. With the common goal of advancing research on heart disease, they have already published more than 50 scientific publications and developed large-scale studies and smart wearable technologies.

In the first part of this talk, we will focus on the Bavarian Cloud for Health Research (BCHR), which is the cornerstone of the project. The BCHR is architected around confidential computing technologies and hosted at the top-tier Leibniz Supercomputing Centre (LRZ) in Munich; we will present how the Big Data and Artificial Intelligence team has engineered it with security and performance for AI/ML workloads in mind. We will showcase heterogeneous workloads running on the OpenStack-based cloud with hundreds of cores at the petabyte scale, as well as its prospective integration into the European cloud GAIA-X.

In the second part of this talk, we will focus on a new axis of research opened by confidential computing in the area of Privacy-Preserving AI. While approaches like Differential Privacy, Secure Multiparty Computation, or Homomorphic Encryption allow parties to collaborate on confidential data, they come at the expense of the model’s utility. We will discuss how TEEs can be repurposed for AI workloads and make it possible to train models privately, at high velocity, and without reducing the model’s accuracy. The emphasis will be on computer vision applications with convolutional neural networks, secure inference in TEEs, hardware acceleration with GPUs, and remote attestation of the privacy guarantee.

Florent Dufour
Leibniz Supercomputing Centre
Foundations
Container code and configuration integrity with confidential containers on Azure

Code integrity is an integral part of confidential computing offerings, but with containers there are other sets of configurations that can potentially alter the integrity of the container when it's deployed and run within a TEE.

Please join Amar and Pawan from Microsoft to learn how Azure and the Kata community are taking an approach to achieve the full integrity goals of the TEE-initiated container environment and its reflection in the guest attestation report. We will demonstrate a real-world use case where this evidence, reflective of the container configuration, can be verified remotely before exchanging PII.

Amar Gowda
Principal Product Manager
Microsoft
Pawan Khandavilli
Microsoft
Apps & Solutions
MobileCoin Fog: a cloud you can't see through

A basic challenge in any "privacy coin" is that if, by design, it is hard to determine who owns any given coin, then it also becomes harder for a user to find their own coins. A user may need to do this if they get a new phone, for instance because their old phone was lost or stolen. If they still have their private keys, they should still be able to do this, but in most privacy coins this involves downloading and scanning the entire blockchain, which is too CPU and bandwidth intensive to be practical at scale. In cryptocurrencies that are merely pseudonymous, a user can simply tell a server their address and ask for all parts of the blockchain connected to their address. The bandwidth of this is then only proportional to their transaction volume, and the server can use a database to make this lookup efficient.

MobileCoin Fog enables an essentially equivalent user experience, where users only download their own transactions, without compromising user privacy. To achieve this, MobileCoin Fog uses a series of SGX enclaves together with Oblivious RAM data structures.

Oblivious RAM inside of SGX was first proposed by Sasy, Gorbunov and Fletcher, who produced a proof of concept called ZeroTrace in 2017. MobileCoin Fog is the first production-grade, optimized version of this that has been deployed at scale in a webservice to real users. MobileCoin Fog is designed as an "oblivious" web service, wherein the service operator learns as little as possible about the recipient of any transaction, taking into account all information that they can see from running the service, and active attacks that they might mount against enclaves in the system.

Open source cryptographic code, together with open source SGX enclaves, allow us to prove to the user that it is impossible for the service operator to harvest their data when they use the service. This model is inspired by the design of the Signal Contact Discovery service. MobileCoin Fog also resolves a scaling problem that threatens privacy coins:

- Every transaction must be "scanned" by every user using their private keys, in order to find their coins;

- Scanning (typically) involves many elliptic curve operations, which are often on the order of 100 microseconds;

- This means that if you want to scale to 10,000 tx/s, you need at least 1 CPU per user just to keep up with the blockchain.

Fog requires many orders of magnitude fewer machines to achieve essentially the same thing.
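
As a back-of-the-envelope check of the naive scanning cost quoted above (figures taken from the abstract, not measured here), full-chain scanning already consumes a whole CPU core per user at the target throughput:

```python
# Rough arithmetic behind the scaling claim above (inputs assumed from the text).
tx_per_second = 10_000        # target chain throughput
scan_cost_seconds = 100e-6    # ~100 microseconds of EC operations per transaction

cpu_seconds_per_user_each_second = tx_per_second * scan_cost_seconds
print(cpu_seconds_per_user_each_second)  # 1.0 -> one full CPU core per user
```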

Chris Beck
MobileCoin
Foundations
Trusted in-guest hypervisor services with the secure VM service module

In virtualization-based Confidential Computing systems, there is a trend to move more hypervisor functionality into the trusted guest context.

It started with register encryption, where big parts of instruction intercept handling went into the guest kernel. The AMD Secure Nested Paging extension introduced the concept of privilege levels (VMPL), which allow moving even more hypervisor functionality into the guest to reduce the attack surface.

This presentation introduces a Secure VM Service Module (SVSM) based on VMPLs to provide trusted services to the guest operating system. The SVSM runs inside the guest context and is thus part of the TEE. With the SVSM, it is feasible to emulate a trusted TPM device in the guest context and/or implement trusted live migration services. At the end of the presentation, I will discuss some future directions, including possibly running unmodified guest operating systems under AMD Secure Nested Paging.

Jörg Rödel
SUSE
Apps & Solutions
Enabling secure multi-party collaboration with confidential computing

How can we create a trusted execution environment (TEE) that supports a trust model where the workload author, workload operator, and resource owners are separate, mutually distrusting parties?

We will propose such a system that is designed to release secrets only to authorized workloads and enables secure multi-party collaboration use cases. This system leverages confidential computing, remote attestation and a hardened VM image to help protect the workload from an untrusted workload operator, and provide code integrity, data integrity and data confidentiality guarantees. Finally, we will discuss possible attacks on this system and their mitigations.
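
A minimal sketch of the core decision such a system makes is shown below: a secret is released only when the attestation-verified workload measurement matches a value the data owners authorized. The measurement values, function names, and policy shape are assumptions for illustration, not the design proposed in the talk.

```python
# Sketch (assumed names/values): release a secret only to workloads whose
# attestation-verified measurement appears in the data owners' allow-list.
from typing import Optional

AUTHORIZED_MEASUREMENTS = {
    "9f2b...",  # placeholder: digest of a workload image all parties approved
}

def maybe_release_secret(verified_measurement: str, secret: bytes) -> Optional[bytes]:
    """Return the secret only for an authorized, attestation-verified workload."""
    if verified_measurement in AUTHORIZED_MEASUREMENTS:
        return secret
    return None  # unknown workloads (including the operator's) get nothing
```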

Keith Moyer
Google
Foundations
Path towards the vision of confidential clouds

In the next 7–10 years all clouds will be confidential clouds. We will stop qualifying clouds with the word confidential. All clouds, by their very nature, will be confidential clouds. There are many key requirements that must be addressed by the industry over this time frame to operationalize confidential clouds.

In this session, we will look at the 7-8 key requirements, like pervasive confidential compute infrastructure, ZTA-based attestation services, key management for confidential clouds, benchmarks, governance, etc., needed to accomplish the vision of confidential clouds and operationalize them.

Time permitting, we will walk through one or two use-cases, to show the benefits of confidential computing for end customers.

Raghu Yeluri
Intel
Keynote
Welcome keynote and introduction to confidential computing

Welcome to OC3! In this session, we'll outline the agenda for the day and give an introduction to confidential computing. We'll assess the current state of the confidential-computing space and compare it to last year's. We'll take a look at key problems that still require attention.

Felix Schuster
CEO
Edgeless Systems
Keynote
The Confidential Computing Consortium - accelerating the privacy and security of computing

This talk introduces the Confidential Computing Consortium's work and goals.

Ben Fischer
Confidential Computing Consortium
Foundations
Customer managed and controlled Trusted Computing Base (TCB) with CVMs on Azure

Azure Confidential Virtual Machines (CVMs) offer a stronger isolation environment for a guest partition by leveraging a TEE (Trusted Execution Environment), currently based on AMD SEV-SNP. There are multiple ways to deploy CVMs in Azure; we are going to discuss one of the latest deployment options, referred to as "Customizable/Custom Firmware" CVM deployments. This deployment option is currently in private preview, and it offers customers the chance to choose what exactly constitutes their guest TCB (Trusted Computing Base). It enables customers to fully control the operational TCB of their CVMs: using open-source components for the in-guest system firmware needs, optionally developing their own protection mechanisms (integrity, encryption, etc.) for the disks, and using the attestation provider/mechanism tied to any KMS (Key Management Service) they would like. We believe this deployment option is well suited for secure workload deployments.

Swamy Shivaganaga Nagaraju
Microsoft
Chris Orsini
Microsoft
Apps & Solutions
Recognizing and overcoming obstacles on the path to broad adoption of confidential computing

A discussion about what lies between the current state of the confidential computing marketplace and the future state where this emerging technology can be said to be ready for broad adoption. In addition to listing the current obstacles, I will also discuss the current trends, both contributing to and detracting from the adoption goal, and throw in some educated guesses about timelines for when things may fall into place.

We can categorize the obstacles that stand in Confidential Computing's way into three classes: a regulatory vacuum, technology gaps, and industry fragmentation.

Mark Novak
Director, Enterprise Security Architecture
JPMorgan Chase & Co.
Cloud Native
Lessons learned: scaling confidential clusters on OpenStack

Recently, hyperscalers have started offering confidential computing services. However, they’re based on proprietary stacks, tools, and APIs. The concept of confidential computing requires a more holistic approach.

If we want to keep the service provider outside the trust boundary, we need open and transparent platforms that are entirely verifiable. This talk presents a case study for an open-source confidential cloud platform. In the first part of the talk, we’ll introduce the state-of-the-art cloud infrastructure stack and the challenges regarding confidential computing and attestation.

Specifically, we’ll introduce OpenStack and its ecosystem as the most widely deployed open-source cloud stack. In the second part, we’ll share the lessons learned from deploying and configuring OpenStack for the latest generation of confidential computing hardware. Finally, we’ll demonstrate a proof of concept for a confidential Kubernetes platform as a common standard for deploying workloads in the confidential cloud.

Samuel Kunkel
STACKIT
Moritz Eckert
Edgeless Systems
Sponsored Talk
Lessons from years of building and evangelizing privacy-preserving technologies

For years, our team has researched, built and evangelized privacy-preserving technologies, including confidential computing. In this session, we’ll share what we’ve learned after thousands of hours of collaboration with academic researchers, regulators, data practitioners, and technology executives. We’ll discuss our conclusions for the most promising technologies for today and tomorrow’s data landscape, and outline our views on the biggest sources of friction for the widespread adoption of privacy-preserving technologies.

Chester Leung
Opaque Systems
Attestation
Making user-facing remote attestation meaningful for external reviews

User-facing remote attestation can allow client devices to validate the state of a server (e.g. what software is running on what hardware) before sending the user's data to the server for processing. One of the challenges is that the remote attestation report usually contains a measurement of the execution state of the server as a cryptographic digest that is not easy to interpret.

This talk will give an overview of Project Oak's implementation approach that makes it possible for any external reviewer to find the exact source code that is running on the server by starting just from this single measurement.

Conrad Grobler
Google