If so, the loss of fidelity versus 4.5 is really noticeable and hurts numerous applications. (Finding a vegan restaurant in a random city neighborhood, for example.)
In your example the LLM should not be responsible for that directly. It should call out to an API or search engine to get accurate and up-to-date information (relatively speaking) and then use that context to generate a response.
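Something like this (a minimal sketch, assuming an OpenAI-style chat API; search_places() is a hypothetical stand-in for whatever real search/places API you'd wire up):

```python
# Minimal retrieval-then-generate sketch.
# Assumes the OpenAI Python SDK (openai>=1.x); search_places() is a
# hypothetical helper that would wrap a real search/places API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def search_places(query: str) -> list[str]:
    # Hypothetical stand-in: in practice this would call a web-search
    # or maps/places endpoint and return snippets with names,
    # addresses, hours, and recent reviews.
    raise NotImplementedError("wire up your search/places API here")


def recommend(query: str) -> str:
    # Fetch fresh results first, then let the model reason over them.
    snippets = search_places(query)
    context = "\n".join(snippets)
    response = client.chat.completions.create(
        model="gpt-4o",  # model name is illustrative
        messages=[
            {
                "role": "system",
                "content": "Recommend places using ONLY the search results provided.",
            },
            {
                "role": "user",
                "content": f"Search results:\n{context}\n\nQuestion: {query}",
            },
        ],
    )
    return response.choices[0].message.content


# e.g. recommend("vegan restaurant near Greenpoint, Brooklyn")
```

The point of the split is that the model never has to memorize which restaurants exist; it only has to interpret whatever the search step returns.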
You should actually try it. The really big models (4 and 4.5, sadly not 4o) have a truly breathtaking ability to dig up hidden gems that have a very low profile on the internet. The recommendations also seem to cut through all the SEO and review manipulation and surface genuinely good places. It really can all be in one massive model.