> You could do that. But then you need to explain to the LLM how to do the work every time you want to use that tool
This reeks of a fundamental misunderstanding of computers and LLMs. We already have a way to get a description of APIs over HTTP: it's called an OpenAPI spec. Just like how MCP retrieves its tool specs over MCP.
Why would an LLM not be able to download an OpenAPI spec plus an API key and put it into the context, the same way MCP does with its custom schema?
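To make the point concrete, here's a minimal sketch of what that would look like: flattening an OpenAPI spec's paths into the kind of tool definitions an LLM expects in its context. The spec itself is a made-up inline example (in practice you'd fetch something like `GET /openapi.json` over HTTP and attach your API key); the `spec_to_tools` helper and the output shape are assumptions, not any particular framework's API.

```python
import json

# Hypothetical inline OpenAPI spec; in practice, fetched over HTTP.
SPEC = {
    "openapi": "3.0.0",
    "paths": {
        "/weather": {
            "get": {
                "operationId": "getWeather",
                "summary": "Get current weather for a city",
                "parameters": [
                    {"name": "city", "in": "query", "required": True,
                     "schema": {"type": "string"}}
                ],
            }
        }
    },
}

def spec_to_tools(spec):
    """Flatten each path+method into a tool definition for the model's context."""
    tools = []
    for path, methods in spec.get("paths", {}).items():
        for method, op in methods.items():
            params = op.get("parameters", [])
            tools.append({
                "name": op.get("operationId", f"{method}_{path}"),
                "description": op.get("summary", ""),
                "parameters": {
                    "type": "object",
                    "properties": {
                        p["name"]: p.get("schema", {"type": "string"})
                        for p in params
                    },
                    "required": [p["name"] for p in params if p.get("required")],
                },
            })
    return tools

print(json.dumps(spec_to_tools(SPEC), indent=2))
```

The output is exactly the sort of JSON tool schema you'd hand a model; nothing here requires a protocol beyond HTTP and the existing spec format.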
> Why would an LLM not be able to download an OpenAPI spec plus an API key and put it into the context, the same way MCP does with its custom schema?
NIH syndrome, probably.