It seems that this free version "may use your prompts and completions to train new models":
https://openrouter.ai/deepseek/deepseek-chat-v3-0324:free
Do you think this needs attention?
That's typical of the free options on OpenRouter; if you don't want your inputs used for training, use the paid one: https://openrouter.ai/deepseek/deepseek-chat-v3-0324
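For anyone curious what that looks like in practice: OpenRouter speaks the OpenAI-compatible API, and the only difference is the model slug. A rough sketch with the openai Python client (the key is a placeholder; the ":free" suffix routes to the free providers):

    # Minimal sketch against OpenRouter's OpenAI-compatible endpoint.
    # "YOUR_OPENROUTER_KEY" is a placeholder.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key="YOUR_OPENROUTER_KEY",
    )

    # "deepseek/deepseek-chat-v3-0324:free" may train on your prompts;
    # the plain slug below is the paid route.
    resp = client.chat.completions.create(
        model="deepseek/deepseek-chat-v3-0324",
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)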
Is OpenRouter planning on distilling models off the prompts and responses from frontier models? That's smart - a little gross - but smart.
COO of OpenRouter here. We are simply stating that WE can’t vouch for the upstream provider’s retention and training policy. We don’t save your prompt data, regardless of the model you use, unless you explicitly opt in to logging (in exchange for a 1% inference discount).
That 1% discount feels a bit cheap to me; if it were a 25% or 50% discount, I would be much more likely to sign up for it.
We don’t particularly want our customers’ data :)
Yeah, but OpenRouter has a 5% surcharge anyway.
Since we are on HN here, I can highly recommend open-webui with some OpenAI-compatible provider. I've been running with Deep Infra for more than a year now and am very happy. New models are usually available within one or two days after release. I also have some friends who use the service almost daily.
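If anyone wants to replicate this, the whole setup is just pointing an OpenAI-compatible client (or open-webui's OpenAI connection) at the provider's base URL. A hedged sketch against Deep Infra; I believe their OpenAI-compatible endpoint is api.deepinfra.com/v1/openai, but check their docs, and the model slug here is only an example:

    from openai import OpenAI

    # Same client shape as for OpenAI itself, different base URL; this is
    # effectively what open-webui does when you add an OpenAI-compatible
    # connection.
    client = OpenAI(
        base_url="https://api.deepinfra.com/v1/openai",  # verify in their docs
        api_key="YOUR_DEEPINFRA_KEY",  # placeholder
    )

    resp = client.chat.completions.create(
        model="deepseek-ai/DeepSeek-V3-0324",  # example; client.models.list() shows what's available
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)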
I too run open-webui locally and use deepinfra.com as my backend. It has been working very well, and I am quite happy with Deep Infra's pricing and privacy policy.
I have set up the same thing at work for my colleagues, and they find it better than OpenAI for their tasks.
Yeah, open-webui is the best frontend for API queries. Everything seems to work well.
I've tried LibreChat before, but the app is terrible at generating titles for chats, often just leaving them as "New Chat". It also lacks a working Code Interpreter.
I'm using open-webui at home with a couple of different models. gemma2-9b fits in VRAM on an NVIDIA 3060 card and performs nicely.
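In case it helps anyone copying this setup: assuming you're serving gemma2-9b through Ollama (the usual local backend for open-webui), it also exposes an OpenAI-compatible endpoint on localhost, so the same client code from upthread works against your own GPU:

    from openai import OpenAI

    # Ollama serves an OpenAI-compatible API on port 11434; the key is
    # ignored but the client requires a non-empty value.
    client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    resp = client.chat.completions.create(
        model="gemma2:9b",  # matches the tag from `ollama pull gemma2:9b`
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)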
And it’s quite easy to set up a Cloudflare tunnel to make your open-webui instance accessible online to just you.
... or a Tailscale network. I've been leaving open-webui running on my laptop on my desk and then going out into the world and accessing it from my phone via Tailscale; works great.
Yeah, this sounds like the more secure option; you don't want your security to hinge on a single flaw in a web service.
Yeah, open-webui is great with local models too. I love it. You can even do a combo: send the same prompt to local and cloud models, or even to various providers, and compare the results.
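And if you want that comparison outside the UI, it's just the same request fanned out to several OpenAI-compatible endpoints. A quick sketch reusing the base URLs mentioned in this thread (keys are placeholders):

    from openai import OpenAI

    # One local backend (Ollama) and one cloud backend (OpenRouter),
    # same prompt, results printed side by side.
    backends = [
        ("local", OpenAI(base_url="http://localhost:11434/v1",
                         api_key="ollama"), "gemma2:9b"),
        ("cloud", OpenAI(base_url="https://openrouter.ai/api/v1",
                         api_key="YOUR_OPENROUTER_KEY"),
         "deepseek/deepseek-chat-v3-0324"),
    ]

    prompt = "Explain tail-call elimination in two sentences."
    for name, client, model in backends:
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": prompt}],
        )
        print(f"--- {name}: {model} ---")
        print(resp.choices[0].message.content)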
I've tried using it, but its browser tab seems to peg one core at 100% after some time. Has anyone else experienced this?
Can open-webui update code on your local computer, a la Cursor etc.?
It has a module system, so maybe it can, but it seems more people are using Aider or Continue for that. There's a bit of stitching things together regardless of whether you show your project to some SaaS or run local models, but if you can manage a Linux system it'll be easy.
Personally, though, I heavily dislike the experience, so I might not be the best one to answer.
That's because it's a third-party API someone is hosting while trying to arb the infra cost or mine training data, or maybe something even more sinister. I stay away from OpenRouter APIs that aren't served by reputable, well-known companies, and even then...
Good grief! People are okay with it when OpenAI and Google do it, but as soon as open-source providers do it, people get defensive about it...
I trust big companies far more with my data than small ones.
Big companies have so much data that they won't be having a human look at mine specifically. Some small place probably has an engineer looking at my logs personally, since I'm user #4.
Also, big companies have security teams whose job is securing the data, and it won't be going over some unencrypted link to Cloudflare because OP was too lazy to set up HTTPS certs.
Equifax.