Yeah. That's another in a long line of MCP articles and blogposts that's been coming up over the past few weeks, that can be summarized as "breaking news: this knife is sharp and can cut someone if you swing it at people, it can cut you if you hold it the wrong way, and is not a toy suitable for small children".
Well, yes. A knife cuts things, it's literally its only job. It will cut whatever you swing it at, including people and things you didn't intend to - that's the nature of a general-purpose cutting tool, as opposed to e.g. safety razor or plastic scissors for small children, which are much safer, but can only cut few very specific things.
Now, I get it, young developers don't know that knives and remote access to code execution on a local system are both sharp tools and need to be kept out of reach of small children. But it's one thing to remind people that the tool needs to be handled with care; it's another to blame it on the tool design.
Prompt injection is a consequence of the nature of LLMs, you can't eliminate it without degrading capabilities of the model. No, "in-band signaling" isn't the problem - "control vs. data" separation is not a thing in nature, it's designed into systems, and what makes LLMs useful and general is that they don't have it. Much like people, by the way. Remote MCPs as a Service are a bad idea, but that's not the fault of the protocol - it's the problem of giving power to third parties you don't trust. And so on.
There is technical and process security to be added, but that's mostly around MCP, not in it.
Well. To repurpose you knife analogy, they (we?) duct-taped a knife on an erratic, PRNG-controlled roomba and now discover that people are getting their Achilles tendons sliced. Technically, it's all functioning exactly as intended, but: this knife was designed specifically to be attached to such roombas, and apparently nobody stopped to think whether it was such a great idea.
And admonishments of "don't use it when people are around, but if you do, it's those people's fault when they get cut: they should've be more careful and probably wore some protective foot-gear" while technically accurate, miss the bigger problem. That is, that somebody decided to strap a sharp knife to a roomba and then let it whiz around in the space full of people.
Mind you, we have actual woodcutting table saws with built-in safety measures: they instantly stop when they detect contact with human skin. So you absolutely can have safe knives. They just cost more, and I understand that most people value (other) people's health and lives quite cheaply indeed, and so don't bother buying/designing/or even considering such frivolities.
This is a total tangent, but we can't have 100% safe knives because one of the uses for a knife is to cut meat. (Sawstop the company famously uses hot dogs to simulate human fingers in their demos.)
Yes. Also, equally important is the fact that table saws are not knives. The versatility of a knife was the whole point of using it as an example.
--
EDIT: also no, your comment isn't a tangent - it's exactly on point, and a perfect illustration of why knives are a great analogy. A knife in its archetypal form is at the highest point of its generality as a tool. A cutting surface attached to a handle. There is nothing you could change in this that would improve it without making it less versatile. In particular, there is no change you could make that would make a knife safer without making it less general (adding a handle to the blade was the last such change).
No, you[0] can't add a Sawstop-like system to it, because as you[1] point out, it works by detecting meat - specifically, by detecting the blade coming in contact with something more conductive than wood. Such "safer" knife thus can't be made from non-conductive materials (e.g. ceramics), and it can't be used to work with fresh food, fresh wood, in humid conditions, etc.[2]. You've just turned a general-purpose tool into a highly specialized one - but we already have a better version of this, it's the table saw!
Same pattern will apply to any other idea of redesigning knives to make them safer. Add a blade cage of some sort? Been done, plenty of that around your kitchen, none of it will be useful in a workshop. Make knife retractable and add a biometric lock? Now you can't easily share the knife with someone else[3], and you've introduced so many operational problems it isn't even funny.
And so on, and so on; you might think that with enough sensors and a sufficiently smart AI, a perfectly safe knife could be made - but then, that's also exist, it's called you the person who is wielding the knife.
To end this essay my original witty comment has now become, I'll spell it out: like a knife, LLMs are by design general-purpose tools. You can make them increasingly safer by sacrificing some aspects of their functionality. You cannot keep them fully general and make them strictly safer, because the meaning of "safety" is itself highly situational. If you feel the tool is too dangerous for your use case, then don't use it. Use a table saw for cutting wood, use a safety razor for shaving, use a command line and your brain for dealing with untrusted third-party software - or don't, but then don't go around blaming the knife or the LLM when you hurt yourself by choosing to use too powerful a tool for the job at hand. Take responsibility, or stick to Fisher-Price alternatives.
Yes, this is a long-winded way of saying: what's wrong with MCP is that a bunch of companies are now trying to convince you to use it in a dangerous way. Don't. Your carelessness is your loss, but their win. LLMs + local code execution + untrusted third parties don't mix (neither do they mix if you remove "LLMs", but that's another thing people still fail to grasp).
As for solutions to make systems involving LLMs safer and more secure - again, look at how society handles knives, or how we secure organizations in general. The measures are built around the versatile-but-unsafe parts, and they look less technical, and more legal.
(This is to say: one of the major measures we need to introduce is to treat attempts at fooling LLMs the same way as fooling people - up to and including criminalizing them in some scenarios.)
--
[0] - The "generic you".
[1] - 'dharmab
[2] - And then if you use it to cut through wet stuff, the scaled-down protection systems will likely break your wrist; so much for safety.
[3] - Which could easily become a lethal problem in an emergency, or in combat.
The problem with the “knife is sharp” argument is that it’s too generic. It can be deployed against most safety improvements. The modern world is built on driving accident rates down to near-zero. That’s why we have specialized tools like safety razors. Figuring out what to do to reduce accident rates is what postmortems are for - we don’t just blame human error, we try to fix things systematically.
As usual, the question is what counts as a reasonable safety improvement, and to do that we would need to go into the details.
I’m wondering what you think of the CaMeL proposal?
https://simonwillison.net/2025/Apr/11/camel/#atom-everything