mirkodrummer 5 days ago

So the analysis is done with another call to claude with instructions like "You are a cybersecurity expert..." basically another level of extreme indirection with unpredictable results, and maybe vulnerable to injection itself

1
nick_wolf 5 days ago

It's definitely a weird loop, relying on another LLM call to analyze potential issues in stuff meant for an LLM. And you're right, it's not perfectly predictable – you might get slightly different feedback run-to-run until careful prompt engineering, that's just the nature of current models. That's why the pattern-matching checks run firs, they're the deterministic baseline. The Claude analysis adds a layer that's inherently fuzzier, trying to catch subtler semantic tricks or things the patterns miss.

And yeah, the analysis prompt itself – could someone craft a tool description that injects that prompt when it gets sent to Claude? Probably. It's turtles all the way down, sometimes. That meta-level injection is a whole other can of worms with these systems. It's part of why that analysis piece is optional and needs the explicit API key. Definitely adds another layer to worry about, for sure.