We're not competing with Gemini or OpenAI or the big cloud providers. For instance, Google is partnering with NVIDIA to ship Gemini on-prem to regulated industries in a CC environment, both to protect the model weights and to give those customers additional data privacy on-prem: https://blogs.nvidia.com/blog/google-cloud-next-agentic-ai-r...
We're simply trying to bring similar capabilities to other companies. Inference is just our first product.
>cloud provider can SSH into the VM
The point we were making was that CC has traditionally been used to remove trust from the cloud provider, but not from the application provider. We're going further and removing trust from ourselves (as the application provider), and we can enable our customers (who could be other startups or neoclouds) to remove trust from themselves and prove that to their customers.
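To make "prove that to their customers" concrete, here is a rough sketch of what the client side of that looks like. Everything in it (the host, endpoint, field names, the pinned measurement) is illustrative rather than our actual API, and the signature check against the hardware vendor's root of trust is omitted; real stacks like SEV-SNP, TDX, or Nitro each have their own report formats and verifier libraries.

```python
# Hypothetical sketch: a client verifies the enclave's attestation report
# before sending any data. Endpoint path, field names, and the expected
# measurement are all made up for illustration.
import requests

PROVIDER = "https://inference.example.com"   # hypothetical API host
EXPECTED_MEASUREMENT = "3f7a..."             # hash of the audited enclave image, pinned by the customer

def verify_enclave() -> bool:
    # Fetch the attestation report the enclave produced at boot.
    report = requests.get(f"{PROVIDER}/v1/attestation", timeout=10).json()

    # A real client must also check that the report is signed by the
    # hardware vendor's root of trust (omitted here). The key step shown:
    # the measurement must match an image the customer has independently
    # audited or reproduced, so the application provider can't swap it out.
    return report.get("measurement") == EXPECTED_MEASUREMENT

if __name__ == "__main__":
    if verify_enclave():
        print("Enclave matches the audited image; safe to send requests.")
    else:
        raise SystemExit("Attestation mismatch: do not send data.")
```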
You are providing the illusion of trust though.
There are a multitude of components between my app and your service, and you have secured one of them, arguably the least important. You can't provide any guarantees about, say, the API server my requests go through, or your networking stack, which someone (e.g. a government) could MITM.
I don't know anything about "secure enclaves", but I assume that this part is sorted out. It should be possible to use HTTPS with it, I imagine. If not, yeah, it is totally dumb from a conceptual standpoint.
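Something along these lines, I'd guess: bind the TLS session to the attestation, so a MITM on the network or at the API layer can't terminate it outside the enclave. All names here are made up, and the concrete wire format varies by stack; only the pattern (enclave commits to its TLS key via the attestation report, client pins it) is the standard one.

```python
# Hypothetical sketch: pin the TLS certificate to the attested enclave.
# Host and the pinned value are placeholders, not a real deployment.
import hashlib
import socket
import ssl

HOST = "inference.example.com"   # hypothetical enclave-hosted endpoint
PORT = 443

def pinned_cert_hash_from_attestation() -> str:
    # In a real flow this comes from a verified attestation report whose
    # report data contains a hash of the enclave's TLS public key / cert.
    return "expected-sha256-of-enclave-cert"   # placeholder

def connect_end_to_end() -> ssl.SSLSocket:
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((HOST, PORT)),
                           server_hostname=HOST)

    # Hash the certificate actually presented on this connection ...
    presented = hashlib.sha256(sock.getpeercert(binary_form=True)).hexdigest()

    # ... and refuse to proceed unless it matches the key the attested
    # enclave committed to. An intermediary proxy can't produce that cert.
    if presented != pinned_cert_hash_from_attestation():
        sock.close()
        raise ConnectionError("TLS endpoint is not the attested enclave")
    return sock
```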