It's always been true that you need to drop down a level of abstraction to extract the last bit of performance. (e.g. I wrote a decent-sized game + engine entirely in C about 10 years ago and played with SIMD vectors to optimise the render loop.)
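Not the actual engine code, but a minimal sketch of the kind of SIMD work I mean: transforming a batch of coordinates four at a time with SSE intrinsics (the function name and the scale/offset transform are just illustrative):

```c
/* Minimal sketch: scale + translate a batch of x-coordinates with SSE.
   Illustrative only -- transform_x and the transform itself are made up
   for this example. */
#include <xmmintrin.h>  /* SSE intrinsics */
#include <stdio.h>

/* Processes 4 floats per iteration; for brevity, n is assumed to be a
   multiple of 4 and xs assumed 16-byte aligned. */
static void transform_x(float *xs, int n, float scale, float offset)
{
    __m128 s = _mm_set1_ps(scale);
    __m128 o = _mm_set1_ps(offset);
    for (int i = 0; i < n; i += 4) {
        __m128 v = _mm_load_ps(&xs[i]);        /* load 4 coords      */
        v = _mm_add_ps(_mm_mul_ps(v, s), o);   /* v = v*scale + offs */
        _mm_store_ps(&xs[i], v);               /* write back         */
    }
}

int main(void)
{
    /* 16-byte aligned buffer of 8 coordinates */
    _Alignas(16) float xs[8] = {0, 1, 2, 3, 4, 5, 6, 7};
    transform_x(xs, 8, 2.0f, 0.5f);
    for (int i = 0; i < 8; i++)
        printf("%.1f ", xs[i]);
    printf("\n");
    return 0;
}
```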
However, I think the vast majority of use cases won't require this level of control, and we'll stop writing prompts by hand once the tools improve.
LangChain and DSPy aren't there for me either - I think the whole idea of prompting + evals needs a rethink.
(full disclosure: I'm working on such a tool right now!)
I'd be interested to check it out.
Here's a take I adapted from someone on the NotebookLM team on swyx's podcast:
> The only way to build really impressive experiences in AI is to find something right at the edge of the model's capability, and to get it right consistently.
So to build something better than the rest, you'll always benefit from being able to bring in every optimization available.
I think the most impressive experiences will come from choosing exactly the right points to involve an LLM, orchestrating the component pieces well, and nailing the user experience.
That's certainly what I found in games. The games that felt magical to play were never the ones with the best hand-rolled engine.
The tools aren't yet good enough to let you ignore prompts, and you'll sometimes need to drop down to raw prompting. I'm looking forward to a future where prompt wrangling is only needed for 1% of my system.
Yeah. The issue is when you're locked into a tool stack/framework where you can't go customize that 1% of cases. A lot of tools try to find the right abstractions so you can "customize everything you'd want to", but they miss the mark in some cases.