Simple: if the choice is overwhelming the LLM, then... divide and conquer - add a tool for choosing tools! It can be as simple as another LLM call with a prompt (ugh, an "agent") tasked strictly with selecting the subset of available tools that seems most useful for the task at hand and returning it to the "parent"/"main" "agent".
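A minimal sketch of that selector, with the model call stubbed out (the tool names, prompt wording, and `fake_llm` stand-in are all hypothetical - a real version would call an actual LLM API here):

```python
import json

# Hypothetical tool registry: name -> one-line description.
TOOLS = {
    "web_search": "Search the web for current information",
    "calculator": "Evaluate arithmetic expressions",
    "file_read": "Read a file from the local workspace",
    "send_email": "Send an email on the user's behalf",
}

def select_tools(task: str, llm) -> list[str]:
    """Ask the selector LLM to pick the subset of tools relevant to `task`."""
    catalog = "\n".join(f"- {name}: {desc}" for name, desc in TOOLS.items())
    prompt = (
        "Given the task below, return a JSON list of the tool names "
        f"most useful for it.\n\nTools:\n{catalog}\n\nTask: {task}"
    )
    names = json.loads(llm(prompt))
    # Keep only names that actually exist, in case the model hallucinates one.
    return [n for n in names if n in TOOLS]

# Stand-in for a real model call, so the sketch runs offline.
def fake_llm(prompt: str) -> str:
    return json.dumps(["calculator"])

print(select_tools("What is 17% of 3,240?", fake_llm))  # ['calculator']
```

The "parent" agent then gets prompted with only that filtered subset instead of the whole catalog.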
You kept adding more tools and now the tool-master "agent" is overwhelmed by the amount of choice? Simple! Add more "agents" to organize the tools into categories; you can do that up front, stuff the categorization into a database, and now it's a rag. Er, a RAG module for selecting tools.
There are so many ways to do it: using cheaper models for selection to reduce costs, dynamic classification, prioritizing tools already applied successfully in previous chat rounds (plus more "agents" to evaluate whether a tool application was successful)...
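That last variant - boosting tools that worked before - can be as dumb as a success counter re-ranking the candidates (function names here are illustrative, not from any particular framework):

```python
from collections import Counter

success = Counter()  # per-chat tally of successful tool applications

def record(tool: str, ok: bool) -> None:
    # In the full version, yet another "agent" decides what `ok` should be.
    if ok:
        success[tool] += 1

def rerank(candidates: list[str]) -> list[str]:
    # Break ties toward tools that already worked in this conversation.
    return sorted(candidates, key=lambda t: success[t], reverse=True)

record("calculator", True)
print(rerank(["web_search", "calculator"]))  # ['calculator', 'web_search']
```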
Point being: just keep adding extra layers of indirection, and you'll be fine.
The problem is that even just having the tools in the context can greatly change the output of the model. So there can be utility in the agent seeing contextually relevant tools (RAG, as you mentioned, is better than nothing) and negative utility in hiding all of them behind a "get_tools" request.