Your experience and mine are polar opposites. The only way I can reconcile that is that we use search differently.
Yes. I am concerned with getting a correct answer. For that, I want to see the websites and evaluate them myself. That takes less energy than evaluating each sentence of an LLM response.
Often my searches take me to Wikipedia, Stack Overflow, or Reddit, anyway. But with LLMs I get a layer of hallucination on TOP of whatever misinformation is on the websites. Why put yourself through that?
I periodically ask ChatGPT about myself. This time I got the best answer so far, so it is improving. It made two mistakes, but one of them comes straight from Wikipedia, so it's not a hallucination, although a better source of information than Wikipedia was available. As for the other, it said that I made "contributions" to a process that I actually created.