Continuum AI is now public. Try out the most secure GenAI service!
Continuum
Continuum gives you access to state-of-the-art large language models (LLMs) with complete data privacy, secured by end-to-end confidential computing.
Continuum is a GenAI service like ChatGPT. The big difference is that Continuum keeps your data private. To achieve this, Continuum uses a technology called confidential computing. Your data gets encrypted before it leaves your device and remains protected throughout, even during processing.
Within its secure environment, Continuum runs Meta Llama 3.1 or other state-of-the-art LLMs, such as models from Mistral.
The Continuum service provides an intuitive API that can be used as a drop-in replacement for the OpenAI API. The API allows for automated, bulk processing of sensitive data. In addition, there's a browser-based version with a familiar chatbot interface.
Continuum provides an OpenAI-compatible API. You only need to run a small proxy for data encryption and attestation. Alternatively, an SDK is available.
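As an illustrative sketch of what an OpenAI-compatible request looks like (the proxy address and model name below are assumptions, not documented values), a client builds a standard chat-completion body and sends it to the local Continuum proxy, which handles encryption and attestation transparently:

```python
import json

# Hypothetical local address of the Continuum proxy; the proxy encrypts
# the request and verifies the backend's attestation before forwarding.
PROXY_URL = "http://localhost:8080/v1/chat/completions"  # assumed port

def build_chat_request(prompt: str, model: str = "meta-llama-3.1-70b") -> dict:
    """Build an OpenAI-compatible chat-completion request body."""
    return {
        "model": model,  # model name is illustrative
        "messages": [{"role": "user", "content": prompt}],
    }

# The resulting JSON body would be POSTed to PROXY_URL by any
# standard OpenAI client pointed at the proxy.
print(json.dumps(build_chat_request("Summarize this report."), indent=2))
```

Because the request shape is unchanged, existing OpenAI-based tooling only needs its base URL redirected to the proxy.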
Continuum is fast and offers a selection of state-of-the-art open-source LLMs, for example, Meta Llama 3.1.
Continuum uses a simple pricing model that is based on the number of input and output tokens. Tokens are roughly equivalent to words.
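Token-based pricing makes costs easy to estimate. The sketch below uses hypothetical per-token prices (not Continuum's actual rates) purely to show the arithmetic:

```python
# Illustrative cost estimate; both prices are assumed placeholders,
# not Continuum's actual rates.
PRICE_PER_1K_INPUT = 0.002   # currency units per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT = 0.006  # currency units per 1,000 output tokens (assumed)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost = input tokens * input rate + output tokens * output rate."""
    return (input_tokens / 1000 * PRICE_PER_1K_INPUT
            + output_tokens / 1000 * PRICE_PER_1K_OUTPUT)

# E.g., 10,000 input tokens and 2,000 output tokens:
print(round(estimate_cost(10_000, 2_000), 4))  # 0.032
```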
Continuum is hosted in the EU, the US, and soon other geographies. We use Azure and other high-quality infrastructure providers.
Reduce costs
Use secure cloud-based AI instead of building out your own capabilities on-prem.
Unlock potential
Process even your organization's sensitive data with the help of AI.
Increase productivity
Provide your employees with a trustworthy and compliant co-pilot.
Assure customers
Protect your customers' data while providing state-of-the-art AI-based services.
In Continuum, prompts and responses are fully protected from external access. Prompts are encrypted client-side using AES-256 and decrypted only within Continuum's confidential computing environment (CCE), enforced by Intel and AMD CPUs and NVIDIA H100 GPUs. Data remains encrypted at runtime within the CCE, ensuring it never appears as plaintext in main memory.
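The client-side step can be sketched as follows. This is a minimal illustration of AES-256-GCM encryption using the Python `cryptography` package; how the key is actually exchanged with the CCE (after attestation) is out of scope here:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_prompt(key: bytes, prompt: str) -> tuple[bytes, bytes]:
    """Encrypt a prompt with AES-256-GCM before it leaves the device."""
    nonce = os.urandom(12)  # 96-bit nonce, fresh per message
    ciphertext = AESGCM(key).encrypt(nonce, prompt.encode(), None)
    return nonce, ciphertext

def decrypt_prompt(key: bytes, nonce: bytes, ciphertext: bytes) -> str:
    """Decrypt (and authenticate) a prompt inside the trusted environment."""
    return AESGCM(key).decrypt(nonce, ciphertext, None).decode()

key = AESGCM.generate_key(bit_length=256)  # 32-byte AES-256 key
nonce, ct = encrypt_prompt(key, "Sensitive prompt")
assert decrypt_prompt(key, nonce, ct) == "Sensitive prompt"
```

GCM also authenticates the ciphertext, so tampering in transit is detected at decryption time.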
The CPUs and GPUs enforcing Continuum's confidential-computing environment issue cryptographic certificates for all software running inside. With these, the integrity of the entire Continuum backend can be verified. Verification is performed on the user side via the Continuum proxy or SDK before sharing any data.
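Conceptually, verification boils down to comparing a reported software measurement against a known-good reference before any data is shared. The sketch below reduces attestation to that comparison; real attestation involves hardware-signed reports checked by the Continuum proxy or SDK, and all names here are hypothetical:

```python
import hashlib

# Known-good measurement of the backend software stack (hypothetical value).
REFERENCE_MEASUREMENT = hashlib.sha256(b"continuum-backend-v1").hexdigest()

def verify_measurement(reported: str) -> bool:
    """Accept the backend only if its measurement matches the reference.

    In a real deployment, `reported` comes from a hardware-signed
    attestation report whose signature is checked first.
    """
    return reported == REFERENCE_MEASUREMENT

good = hashlib.sha256(b"continuum-backend-v1").hexdigest()
bad = hashlib.sha256(b"tampered-backend").hexdigest()
assert verify_measurement(good)
assert not verify_measurement(bad)
```

Only after this check succeeds does the client release the encryption key and start sending prompts.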
Continuum is architected such that user data can neither be accessed by the infrastructure provider (for example, Azure), nor the service provider (Edgeless Systems), nor other parties such as the provider of the AI model (for example, Meta). While confidential-computing mechanisms prevent outside-in access, sandboxing mechanisms and end-to-end remote attestation prevent inside-out leaks.
Sep. 30, 2024
Securing AI With Confidential Computing
"This architecture allows the Continuum service to lock itself out of the confidential computing environment, preventing AI code from leaking data. In combination with end-to-end remote attestation, this ensures robust protection for user prompts."
Sep. 24, 2024
General Availability: Azure confidential VMs with NVIDIA H100 Tensor Core GPUs
Jul. 2, 2024
Advancing Security for Large Language Models with NVIDIA GPUs and Edgeless Systems
"Edgeless Systems introduced Continuum AI, the first generative AI framework that keeps prompts encrypted at all times with confidential computing by combining confidential VMs with NVIDIA H100 GPUs and secure sandboxing."
Do you have questions or remarks about Continuum? Leave your details and we'll get back to you shortly.
The form failed to load. Please send an email to contact@edgeless.systems. Loading likely failed because of your privacy settings or an ad blocker.