satisfice 8 days ago

LLMs have not earned your trust. Classic search has.

The only thing I really care about with classic web search is whether the resulting website is relevant to my needs. On this point I am satisfied nearly all the time. It’s easy to verify.

With LLMs I get a narrative. It is much harder to evaluate a narrative, and errors are more insidious. When I have carefully checked an LLM result, I usually discover errors.

Are you really looking closely at the results you get?

JKCalhoun 8 days ago

Your experience and mine are polar opposites. The only way I can reconcile that is that we use search differently.

satisfice 6 hours ago

Yes. I am concerned about getting a correct answer. For this I want to see websites and evaluate them. This takes less energy than evaluating each sentence of an LLM response.

Often my searches take me to Wikipedia, Stack Overflow, or Reddit, anyway. But with LLMs I get a layer of hallucination on TOP of whatever misinformation is on the websites. Why put yourself through that?

I periodically ask ChatGPT about myself. This time I got the best answer so far, so it is improving. It still made two mistakes. One comes directly from Wikipedia, so it's not a hallucination, although a better source than Wikipedia was available. The other: it said I made "contributions" to a process that I actually created.