This is probably naive and I'm looking forward to a correction; isn't sending your info to Claude's API (or really any "AI API") a violation of your safeguarded private data?
Only if you don't believe the AI vendors when they promise that they won't train on your data.
(Or you don't trust them not to have security breaches that grant attackers access to logged data, which remains a genuine threat, albeit one that's true of any other cloud service.)
I have an AI/bridge to sell you.
Believing vendors who tell you "we won't train on your data" is a huge competitive advantage right now.
Using AWS Bedrock is the choice I've seen made to eliminate this problem.
How does Bedrock eliminate this problem?
You aren't sending your data to Anthropic: no one has access to what you send except you. If you use PrivateLink, it doesn't even leave your VPC.
You could always run your own server locally if you have a decent gpu. Some of the smaller LLMs are getting pretty good.
Also M-series Macs have an insane price/performance/electricity consumption ratio in LLM use-cases.
Any M-series Mac Mini can run a pretty good local model with usable speed. The high-end models easily compete with dedicated GPUs.
Correct. My dusty Intel NUC is able to run a decent 3B model (thanks to Ollama) with fans spinning, but it does not affect any other running applications. It is very useful for local hobby projects. Visible lags and freezes begin if I start a 5B+ model locally.
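To make the "run it locally" point concrete, here's a minimal sketch of querying an Ollama server from Python. It assumes Ollama is installed, `ollama serve` is running on its default port (11434), and a small model such as `llama3.2:3b` has already been pulled; the function names here are my own, not part of any API.

```python
# Minimal sketch: talk to a locally running Ollama server, so prompts
# never leave your machine. Assumes `ollama serve` is up and a small
# model (e.g. llama3.2:3b) has been pulled beforehand.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_request(model: str, prompt: str) -> dict:
    # Payload shape for Ollama's /api/generate endpoint;
    # stream=False returns the whole response in one JSON object.
    return {"model": model, "prompt": prompt, "stream": False}

def ask(model: str, prompt: str) -> str:
    # Send the prompt to the local server and return the generated text.
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_request(model, prompt)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    # Only runs if the Ollama server is actually available locally.
    print(ask("llama3.2:3b", "In one sentence: why run models locally?"))
```

Since everything stays on localhost, nothing is logged by a third party, which is the whole appeal of the approach described above.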