dharmab 7 days ago

This is a total tangent, but we can't have 100% safe knives because one of the uses for a knife is to cut meat. (Sawstop the company famously uses hot dogs to simulate human fingers in their demos.)

1
TeMPOraL 6 days ago

Yes. Also, equally important is the fact that table saws are not knives. The versatility of a knife was the whole point of using it as an example.

--

EDIT: also no, your comment isn't a tangent - it's exactly on point, and a perfect illustration of why knives are a great analogy. A knife in its archetypal form is at the highest point of its generality as a tool. A cutting surface attached to a handle. There is nothing you could change in this that would improve it without making it less versatile. In particular, there is no change you could make that would make a knife safer without making it less general (adding a handle to the blade was the last such change).

No, you[0] can't add a Sawstop-like system to it, because as you[1] point out, it works by detecting meat - specifically, by detecting the blade coming in contact with something more conductive than wood. Such "safer" knife thus can't be made from non-conductive materials (e.g. ceramics), and it can't be used to work with fresh food, fresh wood, in humid conditions, etc.[2]. You've just turned a general-purpose tool into a highly specialized one - but we already have a better version of this, it's the table saw!

Same pattern will apply to any other idea of redesigning knives to make them safer. Add a blade cage of some sort? Been done, plenty of that around your kitchen, none of it will be useful in a workshop. Make knife retractable and add a biometric lock? Now you can't easily share the knife with someone else[3], and you've introduced so many operational problems it isn't even funny.

And so on, and so on; you might think that with enough sensors and a sufficiently smart AI, a perfectly safe knife could be made - but then, that's also exist, it's called you the person who is wielding the knife.

To end this essay my original witty comment has now become, I'll spell it out: like a knife, LLMs are by design general-purpose tools. You can make them increasingly safer by sacrificing some aspects of their functionality. You cannot keep them fully general and make them strictly safer, because the meaning of "safety" is itself highly situational. If you feel the tool is too dangerous for your use case, then don't use it. Use a table saw for cutting wood, use a safety razor for shaving, use a command line and your brain for dealing with untrusted third-party software - or don't, but then don't go around blaming the knife or the LLM when you hurt yourself by choosing to use too powerful a tool for the job at hand. Take responsibility, or stick to Fisher-Price alternatives.

Yes, this is a long-winded way of saying: what's wrong with MCP is that a bunch of companies are now trying to convince you to use it in a dangerous way. Don't. Your carelessness is your loss, but their win. LLMs + local code execution + untrusted third parties don't mix (neither do they mix if you remove "LLMs", but that's another thing people still fail to grasp).

As for solutions to make systems involving LLMs safer and more secure - again, look at how society handles knives, or how we secure organizations in general. The measures are built around the versatile-but-unsafe parts, and they look less technical, and more legal.

(This is to say: one of the major measures we need to introduce is to treat attempts at fooling LLMs the same way as fooling people - up to and including criminalizing them in some scenarios.)

--

[0] - The "generic you".

[1] - 'dharmab

[2] - And then if you use it to cut through wet stuff, the scaled-down protection systems will likely break your wrist; so much for safety.

[3] - Which could easily become a lethal problem in an emergency, or in combat.