killthebuddha 3 days ago

I see a good number of comments that seem skeptical or confused about what's going on here or what the value is.

One thing that some people may not realize is that right now there's a MASSIVE amount of effort duplication around developing something that could maybe end up looking like MCP. Everyone building an LLM agent (or pseudo-agent, or whatever) right now is writing a bunch of boilerplate for mapping between message formats, tool specification formats, prompt templating, etc.
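To make the boilerplate point concrete, here's a sketch of the kind of glue everyone keeps rewriting: translating a tool spec from one vendor's format to another's. The field names follow the OpenAI and Anthropic chat APIs as commonly documented, but treat the mapping as illustrative rather than exhaustive.

```python
def openai_tool_to_anthropic(tool: dict) -> dict:
    """Map an OpenAI-style function tool spec to Anthropic's shape."""
    fn = tool["function"]
    return {
        "name": fn["name"],
        "description": fn.get("description", ""),
        # OpenAI calls the JSON Schema "parameters"; Anthropic calls it "input_schema".
        "input_schema": fn.get("parameters", {"type": "object", "properties": {}}),
    }

spec = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

print(openai_tool_to_anthropic(spec)["name"])  # get_weather
```

Multiply this by every provider pair, plus message formats and prompt templates, and you get the duplication a shared protocol is meant to kill.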

Now, having said that, I do feel a little bit like there are a few mistakes being made by Anthropic here. The big one, to me, is that they've set the scope too big. For example, why are they shipping standalone clients and servers rather than client/server libraries for all the existing and wildly popular ways to fetch and serve HTTP? When I've seen similar mistakes made (e.g. by LangChain), I assume they're targeting brand new developers who don't realize that they just want to make some HTTP calls.
(Replaced by the edit below.)

Another thing that I think adds to the confusion: the boilerplate-ish stuff I mentioned above is annoying, but what's REALLY annoying and actually hard is generating a series of contexts, using variations of similar prompts, in response to errors/anomalies/features detected in generated text. That's how I'd define "prompt engineering", and it's the actual hard problem we have to solve. Because they named it the Model Context Protocol, I assumed they were solving prompt engineering problems (maybe by standardizing common prompting techniques like ReAct, CoT, etc.).
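The "regenerate context in response to detected errors" loop the parent describes can be sketched minimally. This is a toy retry-with-feedback loop, not anyone's real API: `robust_json_call` and the stand-in model are hypothetical names, and the only "anomaly detection" here is a JSON parse check.

```python
import json

def robust_json_call(base_prompt, llm, max_tries=3):
    """Re-prompt with the parse error appended until the reply is valid JSON.

    `llm` is any callable prompt -> text; it stands in for a real model call.
    """
    prompt = base_prompt
    last_err = None
    for _ in range(max_tries):
        raw = llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError as err:
            # Feed the detected error back into a varied prompt.
            last_err = err
            prompt = (
                base_prompt
                + f"\n\nYour previous reply was not valid JSON ({err}); "
                + "reply with a single JSON object and nothing else."
            )
    raise ValueError(f"no valid JSON after {max_tries} tries: {last_err}")

# Toy stand-in model: fails once, then complies.
replies = iter(['not json, sorry', '{"city": "Oslo"}'])
result = robust_json_call("Extract the city as JSON.", lambda p: next(replies))
print(result)  # {'city': 'Oslo'}
```

Real pipelines swap the parse check for richer detectors (schema validation, groundedness checks, etc.), but the shape of the loop is the same, and it's exactly the part MCP's name suggested it would standardize.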

thelastparadise 3 days ago

Your point about boilerplate is key, and it’s why I think MCP could work well despite some of the concerns raised. Right now, so many of us are writing redundant integrations or reinventing the same abstractions for tool usage and context management. Even if the first iteration of MCP feels broad or clunky, standardizing this layer could massively reduce friction over time.

Regarding the standalone servers, I suspect they’re aiming for usability over elegance in the short term. It’s a classic trade-off: get the protocol in people’s hands to build momentum, then refine the developer experience later.

jappgar 2 days ago

I don't see why I or any other developer would abandon their homebrew agent implementation for a "standard" which isn't actually a standard yet.

I also don't see any of that implementation as "boilerplate". Yes, there's a lot of similar code being written right now, but that's healthy co-evolution. If you have a look at the codebases for LangChain and other LLM toolkits, you'll realize it's a smarter bet to just roll your own for now.

You've definitely identified the main hurdle facing LLM integration right now and it most definitely isn't a lack of standards. The issue is that the quality of raw LLM responses falls apart in pretty embarrassing ways. It's understood by now that better prompts cannot solve these problems. You need other error-checking systems as part of your pipeline.
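A minimal sketch of the out-of-band error checking described above: run the raw model output through a list of independent validators and collect the failures, so downstream code can decide whether to retry, escalate, or reject. The check names and rules here are made up for illustration.

```python
import re

# Hypothetical post-hoc validators on raw LLM output.
def no_refusal(text: str) -> bool:
    return "I cannot" not in text

def has_citation(text: str) -> bool:
    return bool(re.search(r"\[\d+\]", text))

def failed_checks(text, checks):
    """Return the names of the checks this output fails."""
    return [c.__name__ for c in checks if not c(text)]

checks = [no_refusal, has_citation]
print(failed_checks("The capital is Oslo [1].", checks))  # []
print(failed_checks("I cannot answer that.", checks))     # ['no_refusal', 'has_citation']
```

The point is that these checks live outside the prompt entirely; no amount of prompt wording replaces them.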

The AI companies are interested in solving these problems but they're unable to. Probably because their business model works best if their system is just marginally better than their competitor.

ineedaj0b 3 days ago

data security is the reason i'd imagine they're letting others host servers

killthebuddha 3 days ago

The issue isn’t with who’s hosting; it’s that their SDKs don’t clearly integrate with existing HTTP servers, regardless of who’s hosting them. I mean integrate at the source level, of course they could integrate via HTTP call.