levifig 16 hours ago

FWIW, llama.cpp links to and fetches models from ollama (https://github.com/ggml-org/llama.cpp/blob/master/tools/run/...).

This issue seems to be the typical case of someone being offended on someone else's behalf, because it implies there's no "recognition of source material" when there's quite a bit of symbiosis between the projects.

diggan 15 hours ago

Well, llama.cpp supports fetching models from a bunch of different sources according to that file: Hugging Face, ModelScope, Ollama, and any HTTP/local source. Seems fair to say they've added support for whatever source you're most likely to find the model you're looking for on.

Not sure I'd say there is "symbiosis" between ModelScope and llama.cpp just because you can download models from there via llama.cpp, just like you wouldn't say there is symbiosis between LM Studio and Hugging Face, or, for an even more fun example, YouTube <> youtube-dl/yt-dlp.
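The "bunch of different sources" point boils down to prefix dispatch: the model argument's URL scheme decides which registry to fetch from. A minimal sketch of that idea in shell (the function name and the exact set of schemes are my assumptions for illustration, not llama.cpp's actual code, which does this in C++ inside tools/run):

```shell
# Illustrative only: map a model reference to its download source,
# mimicking the kind of scheme dispatch tools/run performs.
resolve_source() {
  case "$1" in
    hf://*|huggingface://*) echo "huggingface" ;;  # Hugging Face Hub
    ms://*|modelscope://*)  echo "modelscope"  ;;  # ModelScope
    ollama://*)             echo "ollama"      ;;  # Ollama registry
    http://*|https://*)     echo "http"        ;;  # arbitrary HTTP(S)
    *)                      echo "local"       ;;  # local file path
  esac
}
```

The point is just that Ollama's registry is one of several interchangeable back ends here, which is why downloading from it doesn't by itself imply any deeper relationship between the projects.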

gopher_space 12 hours ago

Symbiosis states that a relationship exists. Subcategories of symbiosis state how useful that relationship is to either party, and they're determined by the observer rather than the organisms involved.

ActionHank 16 hours ago

Yes and no. The problem with not expecting a prominent project to follow the rules is that it makes it easier, and more likely, for no one to follow the rules.

Broken window theory.

cwmoore 8 hours ago

Police broke my window. No theory needed.

int_19h 13 hours ago

The fact that Ollama has been downplaying their reliance on llama.cpp has been known in the local LLM community for a long time now. Describing the situation as "symbiosis" is very misleading IMO.

moralestapia 14 hours ago

>FWIW

It's not worth much. That is a completely different thing.

What you mention equates to downloading a file from the web.

Ollama using code from llama.cpp without complying with the license terms is illegal.