The thing most LLM maximalists don't realize is that the bottleneck for most people is not code generation; it's code understanding. You may have doubled the speed at which you created something, but you pay that time back double in code review, testing, and building a mental model of the codebase in your head. And you _need_ to do this if you want any chance of maintaining the codebase (i.e. bugfixes, refactoring, etc.).
Totally agree! Reading code is harder than writing it, and I think I spend more time reading and trying to understand than I do writing.
But this CEO I just met on LinkedIn?
"we already have the possibility to both improve our productivity and increase our joy. To do that we have to look at what software engineering is. That might be harder than it looks because the opportunity was hidden in plain sight for decades. It starts with rethinking how we make decisions and with eliminating the need for reading code by creating and employing contextual tools."
Context is how AI becomes a whole new layer of complexity that SWE teams have to maintain.
I'm so confused.
I have often had the same thought in response to the effusive praise some people have for their sophisticated, automated code editors.
This is not true.
It may be bad practice, but consider that the median developer does not care at all about the internals of the dependencies that they are using.
They care about the interface and about whether they work or not.
They usually do not care about the implementation.
Code generated by LLM is not that different than pulling in a random npm package or rust crate. We all understand the downsides, but there is a reason that practice is so popular.
Popular packages are regularly used and vetted by thousands of engineers, and that level of usage generally leads to subtle bugs being found and fixed. Blindly copy/pasting some LLM code is the opposite of that. It might be regurgitating some well-developed code, but it's at least as likely to be generating something that looks right but is completely wrong in some way.
"Code generated by LLM is not that different than pulling in a random npm package or rust crate"
So I really hope you don't pull in packages randomly. That sounds like a security risk.
Also, good packages tend to have a team of people maintaining them. How is that the same, exactly?
> So I really hope you don't pull in packages randomly. That sounds like a security risk.
It absolutely is, but that is beside the point.
> Also, good packages tend have a team of people maintaining it. How is that the same exactly?
They famously do not: https://xkcd.com/2347/
If you're a developer, you do yourself a disservice by describing it this way.
> They usually do not care about the implementation.
[citation needed]
> Code generated by LLM is not that different than pulling in a random npm package or rust crate
It's not random, there's an algorithm for picking "good" packages and it's much simpler than reviewing every single line of LLM code.
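For what it's worth, that "algorithm" can be as simple as a few metadata checks. A toy sketch in Python (all field names and thresholds here are made up for illustration, not any registry's actual schema):

```python
# Rough heuristic for vetting a package before pulling it in.
# Field names and thresholds are illustrative assumptions only.

def looks_trustworthy(pkg: dict) -> bool:
    """Cheap proxy checks: popularity, maintenance recency, bus factor."""
    return (
        pkg.get("weekly_downloads", 0) >= 10_000            # widely used => widely vetted
        and pkg.get("days_since_last_release", 9_999) <= 365  # still maintained
        and pkg.get("maintainers", 0) >= 2                  # not a one-person project
    )

# Example: a popular, actively maintained package passes the checks.
print(looks_trustworthy({
    "weekly_downloads": 2_500_000,
    "days_since_last_release": 30,
    "maintainers": 5,
}))
```

The point being: these checks take seconds per dependency, versus reading every generated line.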
>> They usually do not care about the implementation.

> [citation needed]
Everybody agrees that e.g. `make` and autotools are a pile of garbage. It doesn't matter: they work and people use them.
> It's not random, there's an algorithm for picking "good" packages and it's much simpler than reviewing every single line of LLM code.
But you don't need to review every single line of LLM code just as you don't need to review every single line of dependency code. If it works, it works.
Why does it matter who wrote it?
Everything compounds. Good architecture makes it easy to maintain things later. Bad code will slow you to a snail's pace and result in thousands of bug tickets.
If you as a developer care so much about stuff that the software users won't care about, you should look for better tools.
> Code generated by LLM is not that different than pulling in a random npm package or rust crate.
Yes, LLM code is significantly worse than even a random package, as it very often doesn't even compile.