Confidential AI

Confidential AI is the concept of employing confidential computing technology for verifiable protection of data throughout the AI lifecycle, including when the data and models are in use. This enables a range of use cases, such as:

  • Secure outsourcing of AI workloads: companies can delegate AI workloads to infrastructure they don't need to trust. This is relevant for financial and public institutions that handle extremely sensitive data. Because acquiring large numbers of AI accelerators is expensive, outsourcing to verifiably protected infrastructure is an attractive option.
  • IP protection for AI models: this is particularly crucial when deploying a proprietary AI model to a customer's site or integrating it into a third-party offering. With confidential AI, the model can be deployed so that it can be invoked without the risk of being copied or altered. For example, this enables secure on-prem deployments of the ChatGPT model.
  • Privacy-preserving AI training and inference: Confidential computing establishes "black box" systems that ensure verifiable privacy for data sources. In this pattern, a piece of software X is designed to keep its input data private and runs in a confidential-computing environment. Data sources use remote attestation to verify that they are talking to the correct instance of X before handing over their data, which assures them that the data remains private (see the sketch after this list). In the same way, one can create a software X that trains an AI model on data from multiple sources and verifiably keeps that data private. This gives individuals and companies an incentive to share sensitive data and helps with compliance efforts. Consider a scenario where the data includes personally identifiable information (PII) that would otherwise need to be anonymized before training, a process that can degrade data quality; with confidential AI, the data can be used in its original form while remaining verifiably protected.
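
The following Python sketch illustrates the data source's side of this attestation flow under simplified assumptions. The `AttestationReport` type, the `EXPECTED_MEASUREMENT` value, and the helper functions are hypothetical placeholders for illustration only; a real deployment would use the attestation and secure-channel APIs of its confidential-computing stack.

```python
"""Illustrative sketch of the 'black box' pattern: a data source only releases
its data after remotely attesting that the correct software X is running.
All types and helpers are hypothetical stand-ins, not a specific vendor API."""

from dataclasses import dataclass
import hashlib


@dataclass
class AttestationReport:
    """Simplified stand-in for a hardware-signed attestation report."""
    measurement: bytes         # hash of the code/config loaded into the environment
    enclave_public_key: bytes  # key bound to the report, used for a secure channel


# The data source knows (e.g., via a reproducible build) which measurement
# corresponds to the privacy-preserving software X it is willing to trust.
EXPECTED_MEASUREMENT = hashlib.sha256(b"software-X-v1.0").digest()


def verify_report(report: AttestationReport) -> bool:
    """Check that the environment runs exactly the expected software X.
    A real verifier would also validate the hardware vendor's signature chain."""
    return report.measurement == EXPECTED_MEASUREMENT


def release_data(report: AttestationReport, sensitive_data: bytes) -> bytes:
    """Hand over data only if attestation succeeds, encrypted to the attested
    environment (placeholder XOR cipher stands in for a real key exchange)."""
    if not verify_report(report):
        raise RuntimeError("Attestation failed: refusing to release data")
    key = hashlib.sha256(report.enclave_public_key).digest()
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(sensitive_data))


if __name__ == "__main__":
    report = AttestationReport(
        measurement=EXPECTED_MEASUREMENT,
        enclave_public_key=b"hypothetical-enclave-public-key",
    )
    ciphertext = release_data(report, b"sensitive training records")
    print("Data released to attested environment:", ciphertext.hex())
```

The key design point is that the data source's trust decision rests on the verified measurement of X, not on trusting the infrastructure operator; if the report doesn't match the expected measurement, no data leaves the source.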