I get that it's frustrating to be told "skill issue," but using an LLM is absolutely a skill: it takes a combination of understanding the strengths of various tools, experimenting with them to learn the techniques, and just pure practice.
I think if I were giving access to bash, though, it would definitely be in a docker container for me as well.
Sure, you can probably get better at it, but is it really worth the effort over just getting better at programming?
If you are going to race a fighter jet, and you are on a bicycle, exercising more and eating right will not help. You have to use a better tool.
A good programmer with AI tools will run circles around a good programmer without AI tools.
To be fair, that's also what a lot of us used to say about IDEs. In reality, plenty of folks just turned vim into a fighter jet and did just as well without super-heavyweight LLMs.
I'm not totally convinced that we won't see a similar effect here, with some really competitive coders 100% eschewing LLMs and still doing as well as the best that use them.
> In reality, plenty of folks just turned vim into a fighter jet and did just as well without super-heavyweight LLMs.
No, they didn't.
You can get vim and Emacs on par with IDEs[0] fairly easily thanks to the Language Server Protocol. You can't turn them into "fighter jets" without "super-heavyweight LLMs," because that's literally what, per GP, makes an editor/IDE a fighter jet. Yes, Emacs has packages for LLM integration, and presumably so does Vim, but the whole "fighter jet vs. bicycle" comparison is entirely about whether SOTA LLMs are involved or not.
--
[0] - On par wrt. the project-level features IDEs excel at; both editors of course have other strengths that no IDE ever comes close to matching.
Honestly, that is a really fair counterpoint. I've been playing with neovim lately and it feels a lot like some of the earlier IDEs I used, but with more modern power and tremendous speed.
Maybe we will all use LLMs one day in neovim too. :)
What does that even mean? How do you even quantify that?
Got any evidence on that, or is it just “vibes”? I have my doubts that AI tools are helping good programmers much at all, forget about “running circles” around others.
I don't know about "running circles" but they seem to help with mundane/repetitive tasks. As in, LLMs provide greater than zero benefit, even to experienced programmers.
My success ratio still isn't very high, but for certain easy tasks, I'll let an LLM take a crack at it.
Citation needed for your second sentence. This is the problem with AI hype cycles: lots of extraordinary claims, a lot less actual evidence supporting those claims. Lots of anecdotes, though. Maybe the LLMs are in a loop recursively promoting themselves for that sweet venture funding.
Studies take time. https://www.microsoft.com/en-us/research/wp-content/uploads/... is the first one from Microsoft. But it comes back to the gains arriving as people become more skilled.
Yes, not because you will be able to solve harder problems, but because you will be able to solve easier problems more quickly, which frees up more time to get better at programming, as well as at the domain in which you're programming. (That is, talking with your users.)
Except the skill involved is believing random people's advice that a different model will surely be better, with no fundamental reason or justification as to why. The benchmarks are not applicable when trying to apply the models to new work, and benchmarks by their nature do not describe suitability for any particular problem.