
Mark Russinovich is Chief Technology Officer and Technical Fellow for Microsoft Azure, Microsoft’s global enterprise-grade cloud platform. A widely recognized expert in distributed systems, operating systems and cybersecurity, Mark earned a Ph.D. in computer engineering from Carnegie Mellon University. He later co-founded Winternals Software, joining Microsoft in 2006 when the company was acquired. Mark is a popular speaker at industry conferences such as Microsoft Ignite, Microsoft Build, and RSA Conference. He has authored several nonfiction and fiction books, including the Microsoft Press Windows Internals book series, Troubleshooting with the Sysinternals Tools, and the cybersecurity thrillers Zero Day, Trojan Horse, and Rogue Code.
Welcome to OC3! In this session, we'll outline the agenda for the day and give an introduction to confidential computing. We'll assess the current state of the confidential-computing space and compare it to last year's. We'll take a look at key problems that still require attention. We'll close with an announcement from Edgeless Systems.

Felix Schuster is an academic turned startup founder. After completing his PhD in computer security, he joined Microsoft Research, where he worked for four years on the foundations of Azure Confidential Computing before co-founding Edgeless Systems. The startup’s vision is to build an open-source stack for cloud-native Confidential Computing. Throughout his career, Felix has frequently given technical talks at top-tier conferences, including the Usenix Security Symposium, IEEE Symposium on Security & Privacy, and ACM CCS. His 2015 paper on the “VC3” system is believed by some to have coined the term Confidential Computing.
The new Confidential Computing security frontier is still out of reach for most cloud-native applications. The Confidential Containers project aims at closing that gap by seamlessly running unmodified Kubernetes pod workloads in their own dedicated Confidential Computing environments.
Description
Confidential Computing expands the cloud threat model into a drastically different paradigm. In a world where more and more cloud-native applications run across hybrid clouds, not having to trust your cloud provider anymore is a very powerful and economically attractive proposition. Unfortunately, the current confidential computing cloud offerings and architectures are either limited in scope, workload intrusive, or provide node-level isolation only. In contrast, the Confidential Containers open-source project integrates the Confidential Computing security promise directly into cloud-native applications by allowing any Kubernetes pod to run in its own exclusive trusted execution environment.
This presentation will start by describing the Confidential Containers software architecture. We will show how it reuses some of the hardware-virtualization-based Kata Containers software stack components to build confidential micro-VMs for Kubernetes workloads to run in. We will explain how those micro-VMs can transparently leverage the latest Confidential Computing hardware implementations like Intel TDX, AMD SEV, or IBM SE to fully protect pod data while it is in use.
Going into more technical detail, we will go through several key components of the Confidential Containers software stack, such as the Attestation Agent, the container image management Rust crates, and the Kubernetes operator. Overall, we will show how those components integrate to form a software architecture that verifies and attests tenant workloads, which pull and run encrypted container images on top of encrypted memory only.
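For orientation, the following minimal Python sketch illustrates the ordering described above: attest the confidential micro-VM first, release the image decryption key only after verification, and only then pull and decrypt the container image into encrypted memory. The function names are hypothetical placeholders, not the actual Confidential Containers or Kata APIs.

```python
# Hypothetical sketch of the Confidential Containers start-up flow described above.
# None of these names are real Kata/CoCo APIs; they only illustrate the ordering:
# attest first, release the image key second, pull and decrypt the image last.

from dataclasses import dataclass


@dataclass
class Evidence:
    measurement: bytes   # launch measurement of the confidential micro-VM
    report: bytes        # raw TEE attestation report (TDX/SEV/SE specific)


def collect_evidence() -> Evidence:
    """Stand-in for the Attestation Agent collecting TEE evidence."""
    return Evidence(measurement=b"\x00" * 48, report=b"stub-report")


def request_image_key(evidence: Evidence, key_broker_url: str) -> bytes:
    """Stand-in for a key broker releasing the image key only if the evidence verifies."""
    raise NotImplementedError("illustrative only")


def pull_and_decrypt_image(image_ref: str, key: bytes) -> None:
    """Stand-in for the image-management crates pulling an encrypted image."""
    raise NotImplementedError("illustrative only")


def start_confidential_pod(image_ref: str, key_broker_url: str) -> None:
    evidence = collect_evidence()                        # 1. measure and attest the micro-VM
    key = request_image_key(evidence, key_broker_url)    # 2. key released only after verification
    pull_and_decrypt_image(image_ref, key)               # 3. decrypt the image inside encrypted memory
```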
The final part of the presentation will cover the project roadmap and where the project wants to go after its initial release. We will then conclude with the mandatory demo of a Kubernetes pod running in its own trusted execution environment, on top of an actual Confidential Computing enabled machine.

Samuel Ortiz is a software engineer at Apple. He enjoys playing with containers and virtualization, and maintains a few related open-source projects, such as Kata Containers. When not messing with software, Samuel runs across mountain trails and builds radio-controlled toys.
Description
Approximately 500 million medical prescriptions are issued, dispensed, and procured each year in Germany. gematik is legally mandated to develop the involved processes into their digital form within its public digital infrastructure (Telematikinfrastruktur). Due to the staged development of these processes, as well as their variable collaborative nature involving medical professionals, patients, pharmacists, and insurance companies, a centralized approach for data processing was chosen, since it provides adequate design flexibility. In this setup, data protection regulations require any processed medical data to be reliably protected from unauthorized access from within the operating environment of the service. Consequently, the solution is based on Intel SGX as the Confidential Computing technology. This talk introduces the solution, focusing on trusted computing base, attestation, and availability requirements.

Andreas Berg is IT Architect at gematik where he played a leading role in defining the security architecture of the German electronic patient records service (ePA) as well as the e-prescription service (E-Rezept). He has a long-standing interest in Confidential Computing technologies and methods for high assurance and trustworthy IT systems.
For data owners, whether their data has actually been erased after use remains an open question and needs to be proved, even when the code executes in a TEE. We introduce a security proof that verifies that sensitive data lives only inside the TEE and is guaranteed to be erased after use. We call it proof of being forgotten.
Description
One main goal of Confidential Computing is to guarantee that the security and privacy of data in use are under the protection of a hardware-based Trusted Execution Environment (TEE). The Trusted Execution Environment ensures that the content (code and data) inside the TEE is not accessible from outside. However, for data owners, whether their sensitive data has been intentionally or unintentionally leaked by the code inside the TEE is still questionable and needs to be proved. In this talk, we'd like to introduce the concept of Proof of Being Forgotten (PoBF). What PoBF provides is a security proof. Enclaves with PoBF can assure users that sensitive data only lives inside an SGX enclave and will be erased after use. By verifying this property and presenting a report with proof of being forgotten to data owners, the complete data lifecycle protected by the TEE can be strictly controlled, enforced, and audited.
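As a conceptual illustration only (PoBF itself is about proving this property for code inside an SGX enclave, not about Python), the sketch below shows the invariant in miniature: a secret is used within one scope and provably overwritten before control returns to the caller.

```python
# Minimal illustration (not the actual PoBF implementation) of the property the
# talk describes: sensitive data is confined to one scope and erased after use.

import ctypes


def process_secret(secret: bytearray) -> bytes:
    """Use the secret, then wipe it in place before returning."""
    try:
        digest = bytes(b ^ 0x5A for b in secret)   # placeholder computation on the secret
        return digest
    finally:
        # Overwrite the buffer in place so no copy of the secret outlives the call.
        ctypes.memset((ctypes.c_char * len(secret)).from_buffer(secret), 0, len(secret))


buf = bytearray(b"sensitive-input")
process_secret(buf)
assert all(b == 0 for b in buf)   # the caller can check that the buffer was zeroized
```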

Mingshen Sun is a Staff Security Researcher at Baidu. He leads, maintains, and actively contributes to Apache Teaclave (incubating), a confidential computing platform, and several other open source projects. Mingshen regularly gives talks at industry security events. He also collaborates with academic researchers on research projects that solve real-world problems in industry. His interests lie in the areas of security and privacy, operating systems, and programming languages.
Cloud-native and confidential computing will inevitably grow together. This talk maps the design space for confidential Kubernetes and shows the latest corresponding developments from Edgeless Systems.
Description
Kubernetes is the most popular platform for running workloads at scale in a cloud-native way. With the help of confidential computing, Kubernetes deployments can be made verifiable and can be shielded from various threats. The simplest approach towards "confidential Kubernetes" is to run containers inside enclaves or confidential VMs. While this simple approach may look compelling on the surface, on closer inspection, it does not provide great benefits and leaves important questions unanswered: How to set up confidential connections between containers? How to verify the deployment from the outside? How to scale? How to do updates? How to do disaster recovery?
In this talk, we will map the solution space for confidential Kubernetes and discuss pros and cons of the different approaches. In this context, we will give an introduction to our open-source tool MarbleRun, which is a control plane for SGX-based confidential Kubernetes. We will show how MarbleRun, in conjunction with our other open-source tools EGo and EdgelessDB, can make existing cloud-native apps end-to-end confidential.
We will also discuss the additional design options for confidential Kubernetes that are enabled by confidential VM technologies like AMD SEV, Intel TDX, or AWS Nitro. In this context, we will introduce and demo our upcoming product Constellation, which uses confidential VMs to create "fully confidential" Kubernetes deployments, in which all of Kubernetes runs inside confidential environments. Constellation is an evolution of MarbleRun that strikes a different balance between ease of use and TCB size.
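To illustrate what "verifying the deployment from the outside" can look like in practice, here is a hedged Python sketch of a client-side check: fetch the cluster's attestation statement, compare measurements against published expected values, and only then pin the cluster CA. The helper names and the expected-measurement format are assumptions for illustration, not the actual MarbleRun or Constellation interfaces.

```python
# Hypothetical sketch of external verification of a confidential Kubernetes cluster.
# The names below are placeholders, not the real MarbleRun or Constellation APIs;
# they only illustrate the steps: fetch attestation, compare measurements, pin the CA.

from dataclasses import dataclass


@dataclass
class AttestationStatement:
    measurements: dict   # e.g. launch digests of the confidential VMs / enclaves
    ca_certificate: bytes  # cluster CA bound to the attested environment


EXPECTED = {
    "kubernetes-control-plane": "sha384:placeholder-digest",  # published by the operator
}


def fetch_attestation(endpoint: str) -> AttestationStatement:
    raise NotImplementedError("illustrative only")


def verify_cluster(endpoint: str) -> bytes:
    statement = fetch_attestation(endpoint)
    for component, expected_digest in EXPECTED.items():
        if statement.measurements.get(component) != expected_digest:
            raise RuntimeError(f"measurement mismatch for {component}")
    # Only after verification does the client trust TLS connections rooted in this CA.
    return statement.ca_certificate
```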

Moritz Eckert is a cloud security enthusiast. With a past in software security research, he now leads product development at Edgeless Systems. Moritz is a passionate engineer and has presented at top-tier conferences including the Usenix Security Symposium, EuropeClouds Summit, and OC3. Alongside his professional work, Moritz is part of Shellphish, one of the highest-ranked competitive hacking groups in the world.
Privacy is an important aspect of AI applications. We combine Trusted Execution Environments, a library OS, and a scalable service mesh for confidential computing to achieve such security guarantees for TensorFlow-based inference and training with minimal performance and porting overheads.
Description
Access to data is a crucial requirement for the development of advanced driver-assistance systems (ADAS) based on Artificial Intelligence (AI). However, security threats, strict privacy regulations, and potential loss of Intellectual Property (IP) ownership when collaborating with partners can turn data into a toxic asset (Schneier, 2016): Data leaks can result in huge fines and in damage to brand reputation. An increasingly diverse regulatory landscape imposes significant costs on global companies. Finally, ADAS development requires close collaboration across original equipment manufacturers (OEMs) and suppliers. Protecting IP in such settings is both necessary and challenging.
Privacy-Enhancing Technologies (PETs) can alleviate all these problems by increasing control over data. In this paper, we demonstrate how Trusted Execution Environments (TEEs) can be used to lower the aforementioned risks related to data toxicity in AI pipelines used for ADAS development.
Contributions
The three most critical success factors for applying PETs in the automotive domain are low overhead in terms of performance and efficiency, ease of adoption, and the ability to scale. ADAS development projects are major efforts generating infrastructure costs in the order of tens to hundreds of millions. Hence, even moderate efficiency overheads translate into significant cost overheads. Before the advent of 3rd Generation Intel Xeon Scalable processors (Ice Lake), the overhead of SGX-protected CPU-based training of a TensorFlow model was up to 3-fold compared to training on the same CPU without SGX. In a co-engineering effort, Bosch Research and Intel have been able to effectively eliminate these overheads.
In addition, ADAS development happens on complex infrastructures designed to meet the highest demands in terms of storage space and compute power. Major changes to these systems for implementing advanced security measures would be prohibitive in terms of time and effort. We demonstrate that Gramine’s (Tsai, Porter, & Vij, 2017) lift-and-shift approach keeps the effort for porting existing workloads to SGX minimal. Finally, being able to process millions of video sequences consisting of billions of frames in short development cycles necessitates a scalable infrastructure. By using the MarbleRun (Edgeless Systems GmbH, 2021) confidential service mesh, Kubernetes can be transformed into a substrate for confidential computing at scale.
To demonstrate the validity of our approach, Edgeless Systems and Bosch Research jointly built a proof-of-concept implementation of an exemplary ADAS pipeline using SGX, MarbleRun, and Gramine as part of the Open Bosch venture client program.

Stefan Gehrer is a Research Engineer with Robert Bosch Research in Pittsburgh, Pennsylvania. His current research interests are in Trusted Execution Environments, Intrusion Detection Systems, and AI Security. In the past, he also worked on Deep Learning-based Side-Channel Attacks, Physically Unclonable Functions, and Automotive Safety & Security, with publications at top-tier conferences. Stefan has a PhD in Hardware Security from the Technical University of Munich.

Scott Raynor is the lead Security Solutions Architect within the Security Software and Services group at Intel. Scott has worked at Intel for over 25 years in various roles, including CPU and platform validation, OS kernel and driver development, and BIOS architecture and development. He is currently in a customer-facing role, working to enable customers to successfully develop and bring their security-based products to market, in particular products based on Intel® Software Guard Extensions (Intel® SGX).

Moritz Eckert is a cloud security enthusiast. With a past in software security research, he now leads product development at Edgeless Systems. Moritz is a passionate engineer and has presented at top-tier conferences including the Usenix Security Symposium, EuropeClouds Summit, and OC3. Alongside his professional work, Moritz is part of Shellphish, one of the highest-ranked competitive hacking groups in the world.
Support for AMD Secure Nested Paging (SNP) for Linux is under heavy development. There is work ongoing to make Linux run as an SNP guest and to host SNP protected virtual machines. I will explain the key concepts of SNP and talk about the ongoing work and the directions being considered to enable SNP support in the Linux kernel and the higher software layers. I will also talk about proposed attestation workflows and their implementation.
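As a rough illustration of such an attestation workflow, the Python sketch below shows the usual shape of the exchange: the guest obtains an SNP report bound to a fresh nonce, and a verifier checks the report before trusting the guest. The functions are stubs; in a real deployment the report comes from the SNP guest driver and the verifier validates the AMD signing certificate chain, details that are deliberately omitted here.

```python
# Conceptual sketch of an SNP attestation workflow of the kind discussed in the talk.
# Function bodies are stubs; real guests obtain reports through the SNP guest driver
# and verifiers validate the AMD certificate chain before checking report fields.

import hashlib
import os


def get_snp_report(report_data: bytes) -> bytes:
    """Stub: in a real guest this goes through the SNP guest device/driver."""
    raise NotImplementedError("illustrative only")


def verify_report(report: bytes, expected_nonce: bytes, expected_measurement: bytes) -> bool:
    """Stub: a real verifier checks the AMD signature chain, then the embedded fields."""
    raise NotImplementedError("illustrative only")


def attest_guest(expected_measurement: bytes) -> bool:
    nonce = os.urandom(32)
    # Bind the report to this exchange by placing a hash of the nonce in report_data.
    report = get_snp_report(hashlib.sha512(nonce).digest())
    return verify_report(report, nonce, expected_measurement)
```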

Jörg is a Linux Kernel engineer highly involved in the Confidential Computing work at SUSE. He enabled Linux to run as an AMD SEV-ES guest and helps with the enablement of Secure Nested Paging. He maintains the IOMMU subsystem of the Linux kernel and is also involved in the KVM, X86, and PCI subsystems.
Regulated institutions have strong business reasons to invest in confidential computing. As with any new technology, governance takes center stage. This talk explores the vast landscape of considerations involved in provably and securely operationalizing Confidential Computing in the public cloud.
Download the accompanying paper here.
Description
Heavily regulated institutions have a strong interest in strengthening protections around data entrusted to public clouds. Confidential Computing is an area that will be of great interest in this context. Securing data in use raises a significantly larger number of questions around proving the effectiveness of new security guarantees than either securing data in transit or securing data at rest does.
Curiously, this topic has so far received no attention in the CCC, the IETF, or anywhere else that we're aware of.
This talk will propose a taxonomy of confidential computing governance and break the problem space down into several constituent domains, with requirements listed for each. Supply chain and toolchain considerations, controls matrices, control plane governance, attestation and several other topics will be discussed.

Mark Novak is a researcher in JPMorgan's Future Lab for Applied Research and Engineering (FLARE), focusing on Confidential Computing for the financial technology sector. Prior to joining FLARE, Mark spent three years in JPMorgan's Cybersecurity, Technology and Controls organization, and before that he was an architect for confidential computing services and fleet health at Microsoft Azure.
Binary attestation allows a remote machine (e.g., a server) to attest that it is running a particular binary. However, usually, the other party (e.g., a client) is interested in guarantees about properties of the binary. We present a release process that allows checking claims about the binaries.
Description
Project Oak provides a trusted runtime and a generic remote attestation protocol for a server to prove its identity and trustworthiness to its clients. To do this, the server, running inside a trusted execution environment (TEE), sends TEE-provided measurements to the client. These measurements include the cryptographic hash of the server binary signed by the TEE’s key. This is called binary attestation.
However, the cryptographic hash of the binary is not sufficient for making any guarantees about the security and trustworthiness of the binary. What is really desired is semantic remote attestation, which allows attesting to the properties of a binary. However, such approaches are expensive, as they require running checks (e.g., a test suite) during the attestation handshake.
We propose a release process to fill this gap by adding transparency to binary attestation. For transparency, the release process publishes all released binaries in a public, externally maintained verifiable log. Once an entry has been added to the log, it can never be removed or changed. So a client, or any other interested party (e.g., a trusted external verifier or auditor), can find the binary in the verifiable log. Finding the binary in the log is important for the client because it gives the client a better chance of detecting whether it is interacting with a malicious server. Having a public verifiable log is important because it supports public scrutiny of the binaries.
In addition, we are implementing an ecosystem to provide provenance claims about released binaries. We use SLSA provenance predicates for specifying provenance claims. Every entry in the verifiable log corresponding to a released binary contains a provenance claim, cryptographically signed by the team or organization releasing the binary. The provenance claim specifies the source code and the toolchain for building the binary from source. The provenance details allow reproducing server binaries from the source, and verifying (or more accurately falsifying) security claims about the binaries by inspecting the source, its dependencies, and the build toolchain.
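The client-side check this enables might look roughly like the following Python sketch: locate the log entry for the attested binary hash, verify the release signature, and compare the provenance against expectations. The helper names are placeholders rather than the actual Project Oak APIs.

```python
# Hedged sketch of the client-side check enabled by the release process described above:
# given an attested binary hash, find the matching log entry and check its signed provenance.

from dataclasses import dataclass


@dataclass
class LogEntry:
    binary_sha256: str
    provenance: dict        # e.g. a SLSA provenance predicate
    signature: bytes        # signature by the releasing team or organization


def lookup_log_entry(binary_sha256: str) -> LogEntry:
    """Stub: query the public, append-only verifiable log."""
    raise NotImplementedError("illustrative only")


def signature_valid(entry: LogEntry, trusted_key: bytes) -> bool:
    raise NotImplementedError("illustrative only")


def check_attested_binary(binary_sha256: str, trusted_key: bytes, expected_builder: str) -> bool:
    entry = lookup_log_entry(binary_sha256)
    if not signature_valid(entry, trusted_key):
        return False
    # The provenance ties the binary back to its source and build toolchain,
    # so anyone can rebuild it and scrutinize the claims.
    return entry.provenance.get("builder", {}).get("id") == expected_builder
```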

Razieh is a senior software engineer at Google, currently working on Project Oak. In this role, her focus is on developing building blocks for more secure and privacy-preserving software systems. She has a PhD in Software Engineering from the University of Oslo. Prior to Google, she was involved in various industrial and research projects aimed at developing more effective techniques for the verification and validation of safety-critical systems, software product families, and information-intensive systems.
OSS Project Veraison builds software components that can be used to create Attestation Verification services required to establish that a CC environment is trustworthy. These flexible & extensible components can be used to address multiple Attestation technologies and deployment options.
Description
Establishing that a Confidential Computing environment is trustworthy requires the process of Attestation. Verifying the evidential claims in an attestation report can be a complex process, requiring knowledge of token formats and access to a source of reference data that may only be available from a manufacturing supply chain.
Project Veraison (VERificAtIon of atteStatiON) addresses these complexities by building software components that can be used to create Attestation Verification services.
This session discusses the requirements to determine that an environment is trustworthy, the mechanisms of attestation, and how Project Veraison brings consistency to the problems of appraising technology-specific attestation reports and connecting to the manufacturing supply chain where the reference values of what is 'good' reside.
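Conceptually, the appraisal step reduces to comparing evidence claims against reference values and producing a verdict, as in the small Python sketch below. This is not the Veraison API (Veraison itself is a set of Go components); the claim names and verdict labels are illustrative.

```python
# Conceptual sketch of evidence appraisal: decode technology-specific evidence into
# claims elsewhere, then compare each claim against reference values from the supply
# chain and produce an appraisal. Illustrative only; not the Veraison interfaces.

def appraise(evidence_claims: dict, reference_values: dict) -> dict:
    results = {}
    for claim, measured in evidence_claims.items():
        expected = reference_values.get(claim)
        results[claim] = "affirming" if measured == expected else "contraindicated"
    verdict = "trustworthy" if all(v == "affirming" for v in results.values()) else "untrusted"
    return {"verdict": verdict, "details": results}


# Example: one firmware digest matches its reference value, one does not.
print(appraise(
    {"boot-firmware": "sha256:aaa", "runtime": "sha256:bbb"},
    {"boot-firmware": "sha256:aaa", "runtime": "sha256:ccc"},
))
```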

Simon Frost is a Software Architect in the Architecture and Technology Group at Arm. He runs a software prototyping team responsible for building components that will assist in using technologies based on Arm architectures in various environments. Areas of interest include attestation and firmware provenance.

Thomas Fossati is an engineer in the Architecture and Technology Group at Arm where he deals with attestation in various capacities and is the tech lead for Project Veraison. He is the Arm representative on the Confidential Computing Consortium TAC and a co-chair of the CCC Attestation SIG.
How can we make Confidential Computing accessible, so that developers from all levels can quickly learn and use this technology? In this session, we welcome three Outreachy interns, who had zero knowledge of Confidential Computing, to showcase what they've developed in just a few months.
Description
Implementing state-of-the-art Confidential Computing is complex, right? Developers must understand how Trusted Execution Environments work (whether they are process-based or VM-based), be familiar with the different platforms that support Confidential Computing (such as Intel's SGX or AMD's SEV), and have knowledge of complex concepts such as encryption and attestation.
Enarx, an open source project that is part of the Confidential Computing Consortium, abstracts away all these complexities and makes it really easy for developers of all levels to implement and deploy applications to Trusted Execution Environments.
The Enarx project partnered with Outreachy, a diversity initiative from the Software Freedom Conservancy, to welcome three interns who had zero knowledge of Confidential Computing. In just a few months, they learned the basics and started building demos in their favorite languages, from simple to more complex.
In this session, they'll have the opportunity to showcase their demos and share what they've learned. Our hope is to demonstrate that Confidential Computing can be made accessible and easy to use by all developers.

Nick Vidal is the Community Manager of Profian and the Enarx project, which is part of the Confidential Computing Consortium from the Linux Foundation. Previously, he was the Director of Community and Business Development at the Open Source Initiative, Director of Americas at the Open Invention Network, and one of the community leaders of the Drupal project in Latin America.
Confidential Computing requires trust relationships. What are they, how can you establish them, and what are the possible pitfalls? Our focus will be cloud deployments, but we will look at other environments such as telecom and Edge.
Description
Deploying Confidential Computing workloads is only useful if you can be sure what assurances you have about trust. This requires establishing relationships with various entities, and sometimes rejecting certain entities as appropriate for trust. Examples of some of the possible entities include:
- hardware vendors
- CSPs
- workload vendors
- open source communities
- independent software vendors (ISVs)
- attestation providers
This talk will address how and why trust relationships can be established, the dangers of circular relationships, some of the mechanisms for evaluating them, and what they allow when (and if!) they are set up. It describes the foundations for considering when Confidential Computing makes sense, and when you should mistrust the claims of some of those offering it!

Mike Bursell is CEO of Profian, a company in the Confidential Computing space. He is one of the co-founders of the Enarx project (https://enarx.dev) and a visible presence in the Confidential Computing Consortium. Mike has previously worked at companies including Red Hat, Intel and Citrix, with roles working on security, virtualisation and networking. After training in software engineering, he specialised in distributed systems and security. He regularly speaks at industry events in Europe, North America and APAC.
Professional interests include: Confidential Computing, Linux, trust, open source software, security, distributed systems, blockchain, virtualisation.
Mike has an MA from the University of Cambridge and an MBA from the Open University, and is the author of "Trust in Computer Systems and the Cloud", published by Wiley.
As confidential VMs become a reality, trusted components within the guest, such as the guest firmware, become increasingly relevant for the trust and security posture of the VM. In this talk, we will focus on our explorations in building “customer-managed guest firmware” for increased control and auditability of the CVM’s TCB.
Description
Confidential computing developers like flexibility and control over the guest TCB because it allows them to manage which components make up the trusted computing base. In a VM, these requirements are tricky to meet. In this talk, you will learn how we are enabling new capabilities in Azure to help you use a full VM as a Trusted Execution Environment and help your app perform remote attestation with another trusted party in a Linux VM environment with OSS guest firmware options.

Pushkar is a principal architect in Azure Confidential Computing at Microsoft. Pushkar worked at Microsoft Research for nine years on systems and protocols for search and compression before focusing his efforts on practical confidential computing. He then joined Azure to lay the foundations for Azure Confidential Computing, with a charter to take cloud confidential computing to the masses.

Ragavan is a Senior Software Engineer working in Azure Confidential Computing, Microsoft. He has 16 years of software development experience in building system software and antimalware systems.
In this talk, we present the Mystikos project’s progress on Python programming language support and an ML workflow in a cloud environment that preserves the confidentiality of the ML model and the privacy of the inference data even if the cloud provider is not trusted. In addition, we provide a demo showing how to protect the data using secret keys stored in Azure Managed HSM, how to retrieve the keys from MHSM at run time using attestation, and how to use the keys for decryption. We also demonstrate how an application can add the secret provisioning capability with simple configuration.
Description
Confidential ML involves many stakeholders: the owner of the input data, the owner of the inference model, the owner of the inference results, and so on. Porting ML workloads to Confidential Computing and securely and confidentially managing keys and their retrieval into the Confidential Computing ML application are challenging for users who have a limited understanding of Confidential Computing confidentiality and security. We provide a solution that implements the heavy lifting in the Mystikos runtime: the programming language runtime, the attestation, the encryption/decryption, the key provisioning, and so on, so that users only have to convert their Python-based ML applications and configure them with a few lines of JSON. While the demo takes advantage of the Secure Key Unwrap capability of Azure Managed HSM, the solution is based on an open framework that can be extended to other key vault providers.
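The key-release flow described above can be pictured with the following hedged Python sketch: the application attests itself from inside the TEE, the key vault releases the model key only if the evidence verifies, and the model is decrypted in memory. The function names are placeholders, not the Mystikos or Azure Managed HSM APIs.

```python
# Hypothetical sketch of attestation-gated key release for a protected ML model.
# All names are illustrative stand-ins for the runtime/HSM interactions described above.

def get_attestation_evidence() -> bytes:
    raise NotImplementedError("illustrative only")


def release_key(evidence: bytes, key_id: str) -> bytes:
    """Stub for a secure-key-release call: the key vault checks the evidence first."""
    raise NotImplementedError("illustrative only")


def decrypt(blob: bytes, key: bytes) -> bytes:
    raise NotImplementedError("illustrative only")


def load_protected_model(encrypted_model: bytes, key_id: str) -> bytes:
    evidence = get_attestation_evidence()        # 1. prove the code runs inside the TEE
    model_key = release_key(evidence, key_id)    # 2. key is released only after verification
    return decrypt(encrypted_model, model_key)   # 3. plaintext model exists only in memory
```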

Xuejun Yang is a developer for Azure Confidential Computing. Xuejun helps developers build confidential applications in their favorite languages, such as Python, C#, or C/C++, while minimizing the hassle of attestation and secret provisioning. Xuejun has worked on a wide spectrum of systems software, ranging from compilers and language runtimes, including the CRT, to libOS. His work on random testing of C compilers led to a best paper award at PLDI and influenced the random testing of various systems software.
Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Intel SGX to protect the integrity and confidentiality of Smart Contracts. This talk is an FPC 101 and will showcase the benefits of Confidential Computing in the blockchain space.
Description
Fabric Private Chaincode (FPC) is a new security feature for Hyperledger Fabric that leverages Confidential Computing Technology to protect the integrity and confidentiality of Smart Contracts.
In this talk, we will learn what Fabric Private Chaincode is and how it can be used to implement privacy-sensitive use cases for Hyperledger Fabric. The goal of this talk is to give developers and architects all the necessary background and a first hands-on experience to adopt FPC for their projects.
We start with an introduction to FPC, explaining the basic FPC architecture, security properties, and hardware requirements. We will cover the FPC Chaincode API and application integration using the FPC Client SDK.
The highlight of this talk will be a showcase of a new language support feature for Fabric Private Chaincode using the EGo open-source SDK.

Marcus is a researcher at the IBM Research Lab in Zurich, Switzerland, working in the area of blockchain security and applications. His research is focused on secure distributed systems using confidential computing. Marcus holds a PhD in computer science from TU Braunschweig, Germany. He is an active contributor to the Hyperledger community; in particular, he writes code for the Fabric Private Chaincode project. He has spoken at various conferences such as the Hyperledger Global Forum 2021 and 2020, SRDS 2019, and DSN 2017.
Secure ledger technology enables customers who need to maintain a source of truth where even the operator is outside the trusted computing base. Top examples: recordkeeping for compliance purposes and enabling trusted data.
Description
This session will dive into how secure ledgers provide security and integrity to customers in compliance- and auditing-related scenarios, specifically customers who must maintain a source of truth that remains tamper-protected from everyone. We will also discuss how secure ledgers benefit from confidential computing and open source.

Shubhra leads product management for secure ledger services that utilize confidential computing at Microsoft. She is passionate about solving end user problems, learning about new technological concepts and domains, and diving deep into data to surface insights. Her past experiences include building enterprise and consumer products.
We all understand that data sovereignty in highly regulated industries like government, healthcare, and fintech is critical, often prohibiting even the most basic data insights because the data cannot be moved to a centralized location for collaboration or model training. Confidential computing powered by Intel Software Guard Extensions (Intel SGX) changes all of that. Join us to learn how customers across every industry are gaining insights never before possible.

Laura Martinez directs marketing strategy for cybersecurity at Intel Corporation. Laura has spoken on a wide variety of topics, including artificial intelligence, analytics, and IT security spanning healthcare, banking, and transportation. Laura has been a key contributor to Intel’s participation in Cyberweek and RSA, focusing her efforts on translating customer security needs into everyday language.
Laura spent the first part of her career in IT security at Trend Micro, where she managed Premium Support Services before moving into program and product management. In that role, she saw a gap in the market and proposed a new security solution that was sold in the consumer market. During her tenure at Trend Micro, she found that there was a growing need for security in the healthcare market and joined UC Davis Medical Center, where she managed IT Communications and Analytics before joining Intel Corp in its security marketing division.
Storing payment data in your e-commerce site may expose your business to PCI compliance challenges. Azure confidential computing provides a platform for protecting your customers’ financial information at scale.

Stefano Tempesta works at Microsoft in the Azure Confidential Computing product group to make the Cloud a more secure place for your data and apps.
Balancing data privacy and runtime protection with the ease and nimbleness of deployments is the reality of the current state of confidential computing.
Simplicity of pods and availability of orchestration for confidential computing: exploring the adoption of Kata pod isolation with protected virtualisation.
Description
We discuss the use of Kata pod isolation with protected virtualisation, striving for confidential computing with a cloud-native model while preserving most of the Kubernetes compliance. This talk will summarise the state of the technical discussion in the industry, discuss solutions and open questions, and give a hint of the future of confidential computing with cloud-native models. The speed of adoption of confidential computing will to a large extent depend on how easily developers and administrators can incorporate runtime protection into the established technology stack. From use cases to technology demos, the technology team is moving forward.
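For illustration, here is a small Python snippet that renders a pod spec opting into Kata-based isolation via a Kubernetes RuntimeClass. The runtime class name "kata-cc" and the image reference are assumptions; the actual names depend on how a given cluster is configured.

```python
# Illustrative pod spec showing how a workload could opt into Kata-based isolation
# via a RuntimeClass. The runtime class name below is an assumption for illustration.

import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "confidential-workload"},
    "spec": {
        "runtimeClassName": "kata-cc",   # assumed name of a confidential Kata runtime class
        "containers": [
            {"name": "app", "image": "registry.example.com/app:encrypted"},  # hypothetical image
        ],
    },
}

print(json.dumps(pod, indent=2))   # the rendered manifest could then be applied to the cluster
```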

Stefan Liesche is the Architect for IBM Hybrid Cloud on Z. Stefan is focused on security, transparency, protection of data and services, and confidential computing technology in flexible cloud environments. He has developed a broad spectrum of technology areas for IBM, including IBM Cloud Hyper Protect Services and IBM’s Watson Talent portfolio, where Stefan created AI-driven solutions that transform recruiting and career decisions to enhance fairness and tackle biases. Stefan also innovated within the Exceptional User Experience products for several years, with a focus on open solutions and integration. Stefan has more than 20 years of experience as a global technical leader collaborating with partners and customers through joint initiatives.

James works within the IBM Hyper Protect family of offerings, which deliver Confidential Computing to the Cloud using IBM LinuxONE and IBM Z Systems technology. He is responsible for the technical architecture to leverage the IBM Secure Execution for Linux capability (a Trusted Execution Environment) in cloud-native solutions. James has an MA from the University of Cambridge and over 20 years of experience solving customer problems with emerging technologies, contributing to and using open source projects as part of the solution. He is an active contributor to the open source Confidential Containers project, which is looking to enable Cloud Native Confidential Computing by leveraging Trusted Execution Environments to protect containers and data.