Is there more to MCP than being a simple Remote Procedure Call framework that allows AI interactions to include function calls driven by the AI model? The various documentation pages are a bit hand-wavy about what the protocol actually is, but it sounds to me like RPC describes all or most of it.
The biggest contribution is the LLM-compatible metadata that describes the tool and its arguments. It is trivial to adopt. In Python you can use FastMCP to add a decorator to a function, and as long as that function returns a JSON string you are in business. The decorator extracts the arguments and docstrings and presents them to the LLM.
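To make the mechanism concrete, here's a rough stdlib-only sketch of what a FastMCP-style `@tool` decorator does under the hood: introspect the function's signature and docstring and build the metadata the LLM sees. The `tool` decorator, `TOOLS` registry, and `get_weather` function are all hypothetical names for illustration, not the real FastMCP API.

```python
import inspect
import json

# Hypothetical registry standing in for what FastMCP keeps internally.
TOOLS = {}

# Map Python annotations to JSON Schema type names.
_TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool(func):
    """Register a function, deriving its tool schema via introspection."""
    sig = inspect.signature(func)
    TOOLS[func.__name__] = {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {
            name: _TYPE_MAP.get(p.annotation, "string")
            for name, p in sig.parameters.items()
        },
    }
    return func

@tool
def get_weather(city: str, days: int):
    """Return a forecast for the given city."""
    return json.dumps({"city": city, "days": days, "forecast": "sunny"})

# The schema below is roughly what gets handed to the model as tool metadata.
print(json.dumps(TOOLS["get_weather"], indent=2))
```

The point is that the function's type hints and docstring are the spec; no separate IDL file is needed.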
What makes a spec LLM compatible? I've thrown a lot of different things at GPT o1 and it generally understands them better than I do: OpenAPI specifications, unstructured text, log output, etc.