The ads angle is an interesting one, since that's what motivates most of what Google and Meta do. Their LLMs' context windows have been growing, and while that might be the natural progression for LLMs in general, for those two ads businesses there's a pretty straight path to using their LLMs for even more targeted ads. For example, the recent Llama "herd" releases have surprisingly large context windows, and one can imagine why Meta might want that: to stuff in as much as possible of the personal content they already have from their users. Their LLMs could then generate ads in the tone and style of each user and emotionally manipulate them into clicking the link. Google's LLMs also have large context windows, and such a capability might be too tempting to ignore. There were moments, thinking this through, when I wondered whether I was being too cynical, but I don't think they'll leave that kind of money on the table: an opportunity to cut human ad-writer headcount while improving click-through stats for higher profit.
When LLMs are essentially trying to sell me something, the shit is over.
I like LLMs (over search engines) because they are not salespeople. They're one of the few things I actually "trust". (I know many people fall on the other side of that, but no, I actually trust them more than SEO'd websites and ad-driven search engines.)
I suppose my local-LLM hobby is for just such a scenario. While it's a struggle, there is some joy in trying to host locally as powerful an open model as your hardware will allow. And if the time comes when the hosted models can no longer be trusted, pop back to the last reliable model on the local setup (a rough sketch of that workflow is below).
That's what I keep telling myself anyway.
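For the curious, here's a minimal sketch of the "pin a last-trusted local model" idea. It assumes the llama-cpp-python bindings as the runtime, and the model filename and prompt are placeholders of my choosing, not anything from the comment above:

```python
# Minimal sketch: run a pinned, offline copy of an open model with
# llama-cpp-python (pip install llama-cpp-python). The GGUF path is a
# placeholder -- point it at whatever "last trusted" model you archived.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/last-trusted-model.gguf",  # hypothetical local file
    n_ctx=4096,       # context window; bounded by your hardware
    verbose=False,
)

out = llm(
    "Q: Why might someone keep an archived local model around?\nA:",
    max_tokens=128,
    stop=["Q:"],      # stop before the model invents a follow-up question
)
print(out["choices"][0]["text"].strip())
```

The point is only that the weights live on your own disk: once archived, that model's behavior can't be changed out from under you.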
LLMs have not earned your trust. Classic search has.
The only thing I really care about with classic web search is whether the resulting website is relevant to my needs. On this point I am satisfied nearly all the time. It’s easy to verify.
With LLMs I get a narrative. It is much harder to evaluate a narrative, and errors are more insidious. When I have carefully checked an LLM result, I usually discover errors.
Are you really looking closely at the results you get?
Your experience and mine are polar opposites. The only way I can reconcile that is that we use search differently.
Yes. I am concerned about getting a correct answer. For this I want to see websites and evaluate them. This takes less energy than evaluating each sentence of an LLM response.
Often my searches take me to Wikipedia, Stack Overflow, or Reddit, anyway. But with LLMs I get a layer of hallucination on TOP of whatever misinformation is on the websites. Why put yourself through that?
I periodically ask ChatGPT about myself. This time I got the best answer so far, so it is improving. It made two mistakes, but one of them comes directly from Wikipedia, so it's not a hallucination, although a better source of information than Wikipedia was available. As for the other, it said that I made "contributions" to a process that I actually created.
The real threat to Google and Meta is that LLMs become so cheap that it's trivial for a company like Apple to make them available for free, complete with up-to-date links to good products. No more search required if every M-chip-powered device can give you current recommendations for any product or service query.
That is my fantasy, actually.
Meta's models can't be used by companies above a certain threshold, so nope. Apple could wait it out and use a 'free model', but at that point it would be like picking up an open-source database like Postgres: you won't get any competitive advantage.