As an experienced software engineer I'm bothered about pretty much everything about how we develop things on top of LLMs! I can't even figure out how to write automated tests for them.
See also my multi-year obsession with prompt injection and LLM security, which still isn't close to being a solved problem: https://simonwillison.net/tags/prompt-injection/
Yet somehow I can't tear myself away from them. The fact that we can use computers to mostly understand human language (and vision problems as well) is irresistible to me.
This is exactly why I follow your work, this mix of critical thinking and enthusiasm. Please keep going!
> The fact that we can use computers to mostly understand human language
I agree it would be amazing if they did that, but they most certainly do not. I think this is the core of my disagreement here: that you believe this and let it guide you. They don't understand anything; they are matching and synthesizing patterns. I can see how that's enthralling, like watching a Rube Goldberg machine go through its paces, but there is no there there. The idea that there is an emergent something there is at best an unproven theory, is documented as being an illusion, and at worst has become an unfounded messianic belief.
That's why I said "mostly".
I know they're just statistical models, and that having conversations with them is like having a conversation with a stack of dice.
But if the simulation is good enough to be useful, the fact that they don't genuinely "understand" doesn't really matter to me.
I've had tens of thousands of "conversations" with these things now (I know because I log them all). Whether or not they understand anything they're still providing a ton of value back to me.
I guess I respect that you're stating it honestly, but this is a statement of belief or faith. I think it's something you should disclose more often, because it doesn't stem from first principles and is actually just tautological. This is also getting more precise about our fundamental disagreement: I just wouldn't blog about things that are beliefs as if they were the technology itself.
I don't need belief or faith to get use and entertainment out of the transformers. As Simon said, good enough.
You put it so well! I agree wholeheartedly. LLMs are language toys we get to play with, and it's so much fun. But I'm bothered in the same way you are, and that's fine.