Why so much hand-wringing? If you are an anti-AI developer and you are able to develop better code faster than someone using AI, good for you. If AI-using developers end up ruining their codebases within months, as many here are saying, then things will take care of themselves.
I see two main problems with this approach:
1. productivity and quality are hard to measure
2. the codebase they are ruining is the same one I am working on.
> 2. the codebase they are ruining is the same one I am working on.
We're supposed to have a process for dealing with this already, because developers can ruin a codebase without AI.
See point 1.
I don't understand. Presumably you have code reviews to stop coders committing rubbish to the repo?
Faster is not a smart metric to judge a programmer by.
"more code faster" is not a good thing, it has never been a good thing
I'm not worried about pro-AI workers ruining their codebases at their jobs.
I'm worried about pro-AI coworkers ruining my job by shitting up the codebases I have to work in.
I said "better code faster". Delivering features to users is always a good thing, and in fact is the entire point of what we do.
> in fact is the entire point of what we do
Pump the brakes there
You may have bought into some PM's idea of what we do, but I'm not buying it.
As professional, employed software developers, the entire point of what we do is to provide value to our employers.
That isn't always by delivering features to users, and it's certainly not always by delivering features faster.
Even if you say "better faster" ten times fast, being produced quickly and being broadly good are very different qualities. Speed of development can be measured immediately. Quality is holistic: it's a product not just of writing clear structures but of how the code relates to the rest of a given system.
Most of the time I get to the real solution for a problem only after working on the wrong one for a while. If an LLM helps me finish the wrong one faster, that is not helpful, and it could even be damaging if the wrong solution goes to production quickly.
A lot of modern software dev is focused on delivering features to shareholders, not users. Doing that faster is going to make my life, as a user, worse.
I've posted recently about a dichotomy I have had in my head for years as a technical person: there are two kinds of tools. The first lets you do the right thing more easily; the second lets you do the wrong thing more quickly, and for longer, before you have to pay for it. AI/LLMs can definitely be the latter kind of tool, especially in a context where short-term incentives swamp long-term ones.
I'm actually pro-AI and I use AI assistants for coding, but I'm also very concerned that the way those things will be deployed at scale in practice is likely to lead to severe degradation of software quality across the board.
Why the hand-wringing? Well, for one thing, as a developer I still have to work on that code, fix the bugs in it, maintain it, etc. You could say that's a positive, since AI slop provides endless job security for people who know how to clean up after it - and it's true, it does - but it's a very tedious and boring job.
But I'm not just a developer, either - I'm also a user, and thinking about how low the average software quality already is today, the prospect of it getting even worse across the board is very unpleasant.
And as for things taking care of themselves, I don't think they will. So long as companies can still ship something, it's "good enough", and cost-cutting will justify everything else. That's just how our economy works these days.
This assumes a level of both rationality and omniscience that doesn't exist in the real world.
If a company fails to compete in the market and dies, there is no "autopsy" that goes in and realizes it failed because of a chain reaction of factors stemming from bad AI-slop code. And execs are so far removed from the code level that they don't know either, and their next company will do the same thing.
What you're likely to end up with is project managers and developers who do know the AI code sucks, and they'll be heeded by execs just as much as they are now, which is to say not at all.
And when the bad AI-code-using devs apply to the next business whose execs are pro-AI because they're clueless, guess who they'll hire?