stunnAR 6 days ago

This is probably naive and I'm looking forward to a correction; isn't sending your info to Claude's API (or really any "AI API") a violation of your data-privacy safeguards?

simonw 5 days ago

Only if you don't believe the AI vendors when they promise that they won't train on your data.

(Or you don't trust them not to have security breaches that grant attackers access to logged data, which remains a genuine threat, albeit one that applies to any other cloud service too.)

ForOldHack 5 days ago

I have an AI/bridge to sell you.

simonw 5 days ago

Believing vendors who tell you "we won't train on your data" isn't that naive: keeping that promise is a huge competitive advantage for them right now.

jasonjmcghee 6 days ago

Using AWS Bedrock is the choice I've seen made to eliminate this problem.

Everdred2dx 5 days ago

How does bedrock eliminate this problem?

jasonjmcghee 5 days ago

You aren't sending your data to Anthropic; no one has access to what you send except you. If you use PrivateLink, it doesn't even leave your VPC.
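To make that concrete, here's a minimal sketch of what a Bedrock call looks like. The request-body builder below follows the Anthropic-on-Bedrock messages format; the model ID in the comment and the prompt are illustrative assumptions, and the actual `boto3` call is shown only in comments since it needs AWS credentials.

```python
import json

def build_bedrock_body(prompt: str, max_tokens: int = 256) -> str:
    """Build the JSON request body for Claude on AWS Bedrock.

    The request is handled inside your AWS account; with a VPC
    endpoint (PrivateLink) it never crosses the public internet.
    """
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# With boto3 (not run here), the call would look roughly like:
#
# client = boto3.client("bedrock-runtime", region_name="us-east-1")
# resp = client.invoke_model(
#     modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # example ID
#     body=build_bedrock_body("Summarize this record."),
# )
# result = json.loads(resp["body"].read())
```

The point of the indirection: the model endpoint lives in AWS's infrastructure under your account's IAM and networking controls, not at Anthropic's public API.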

redman25 5 days ago

You could always run your own server locally if you have a decent gpu. Some of the smaller LLMs are getting pretty good.

theshrike79 4 days ago

Also, M-series Macs have an insane price/performance/power-consumption ratio for LLM use cases.

Any M-series Mac Mini can run a pretty good local model with usable speed. The high-end models easily compete with dedicated GPUs.

n_ary 5 days ago

Correct. My dusty Intel NUC can run a decent 3B model (thanks to Ollama) with the fans spinning, without affecting any other running applications. It is very useful for local hobby projects. Visible lag and freezes begin if I start a 5B+ model locally.
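For anyone curious what "running locally" looks like in code: Ollama exposes an HTTP API on localhost, so nothing leaves the machine. This is a sketch using only the standard library; the model name is an example (substitute whatever `ollama pull` has fetched), and the actual network call is shown in comments since it requires a running Ollama server.

```python
import json

def build_ollama_request(prompt: str, model: str = "llama3.2:3b") -> bytes:
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON response, not a stream
    }).encode()

# Sending it (not run here), again with only the standard library:
#
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",  # Ollama's default port
#     data=build_ollama_request("Why is the sky blue?"),
#     headers={"Content-Type": "application/json"},
# )
# answer = json.loads(urllib.request.urlopen(req).read())["response"]
```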

stunnAR 3 days ago

Yes - of course. That's been my experience with "ultimate" privacy.