mhast 7 days ago

You could do that. But then you need to explain to the LLM how to do the work every time you want to use that tool.

And you also run the risk that the LLM will randomly fail to use the tool "correctly" on any given invocation. (Either because you forgot to add some information or because the API is a bit non-standard.)

All of this extra explaining and duplication also wastes tokens in the context, costing you extra money and time, since you have to start over every time.

MCP just wraps all of this into a bundle to make it more efficient for the LLM to use. (It also makes it easier to share these tools with other people.)

Or, if you prefer: the first time you use a new API, you can give these instructions to the LLM and have it use your API. Then you tell it "make me an MCP implementation of this", and you can reuse it easily in the future.
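
A minimal sketch of what that wrapping looks like, assuming the official TypeScript MCP SDK (the weather endpoint and parameter names here are made up for illustration):

    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "weather", version: "1.0.0" });

    // The name, description, and schema are advertised to the client once,
    // so the model doesn't have to be re-taught the API on every call.
    server.tool(
      "get_forecast",
      "Get the weather forecast for a city",
      { city: z.string().describe("City name, e.g. 'Berlin'") },
      async ({ city }) => {
        // Hypothetical upstream API; this is where you'd hide the
        // non-standard bits the model keeps tripping over.
        const res = await fetch(
          `https://api.example.com/forecast?city=${encodeURIComponent(city)}`
        );
        return { content: [{ type: "text", text: await res.text() }] };
      }
    );

    await server.connect(new StdioServerTransport());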

Xelynega 6 days ago

> You could do that. But then you need to explain to the LLM how to do the work every time you want to use that tool

This reeks of a fundamental misunderstanding of computers and LLMs. We have a way to get a description of APIs over HTTP; it's called an OpenAPI spec. Just like how MCP retrieves its tool specs over MCP.

Why would an LLM not be able to download an OpenAPI spec + key and put it into the context, like MCP does with its custom schema?
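
A rough sketch of what that would look like (the spec URL here is a hypothetical placeholder; feed the resulting prompt to whatever chat client you already use):

    // Fetch an OpenAPI spec over plain HTTP and inline it into the model's
    // context, the same way an MCP client inlines its tool schemas.
    const spec = await fetch("https://api.example.com/openapi.json")
      .then((r) => r.json());

    const systemPrompt = [
      "You may call the following HTTP API. Its OpenAPI spec is below.",
      JSON.stringify(spec),
      `Send requests with header: Authorization: Bearer ${process.env.API_KEY}`,
    ].join("\n\n");

    console.log(`${systemPrompt.length} characters of tool context`);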

otabdeveloper4 6 days ago

> Why would an LLM not be able to download an OpenAPI spec + key and put it into the context, like MCP does with its custom schema?

NIH syndrome, probably.