justonceokay 10 days ago

I’ve always been the kind of developer that aims to have more red lines than green ones in my diffs. I like writing libraries so we can create hundreds of integration tests declaratively. I’m the kind of developer that disappears for two days and comes back with a 10x speedup because I found two loop variables that should be switched.

There is no place for me in this environment. It's not that I couldn't use the tools to churn out that much code; it's that AI use makes speed-to-production the metric for success. The solution to bad code is more code. AI will never produce a deletion. Publish or perish has come for us and it's sad. It makes me feel old, just like my Python programming made the mainframe people feel old. I wonder what will make the AI developers feel old…

ajjenkins 10 days ago

AI can definitely produce a deletion. In fact, I commonly use AI to do this. Copy some code and prompt the AI to make the code simpler or more concise. The output will usually be fewer lines of code.

Unless you meant that AI won’t remove entire features from the code. But AI can do that too if you prompt it to. I think the bigger issue is that companies don’t put enough value on removing things and only focus on adding new features. That’s not a problem with AI though.

Freedom2 10 days ago

I'm no big fan of LLM generated code, but the fact that GP bluntly states "AI will never produce a deletion" despite this being categorically false makes it hard to take the rest of their spiel in good faith.

As a side note, I've had coworkers disappear for N days too and in that time the requirements changed (as is our business) and their lack of communication meant that their work was incompatible with the new requirements. So just because someone achieves a 10x speedup in a vacuum also isn't necessarily always a good thing.

fifilura 10 days ago

I'd also be wary of the risk of being an architecture astronaut.

A declarative framework for testing may make sense in some cases, but in many cases it will just be a complicated way of scripting something you use once or twice. And when you use it you need to call up the maintainer anyway when you get lost in the yaml.

Which of course feels good for the maintainer, to feel needed.

ryandrake 10 days ago

I messed around with Copilot for a while and this is one of the things that actually really impressed me. It was very good at taking a messy block of code, and simplifying it by removing unnecessary stuff, sometimes reducing it to a one line lambda. Very helpful!

buggy6257 10 days ago

> sometimes reducing it to a one line lambda.

Please don't do this :) Readable code is better than clever code!

n4r9 10 days ago

Are you telling me you've never seen code like this:

  var ageLookup = new Dictionary<AgeRange, List<Member>>();
  foreach (var member in members) {
    var ageRange = member.AgeRange;
    if (ageLookup.ContainsKey(ageRange)) {
      ageLookup[ageRange].Add(member);
    } else {
      ageLookup[ageRange] = new List<Member>();
      ageLookup[ageRange].Add(member);
    }
  }
which could instead be:

  var ageLookup = members.ToLookup(m => m.AgeRange, m => m);

davidgay 9 days ago

I'm of the opinion that

  var ageLookup = new Dictionary<AgeRange, List<Member>>();
  foreach (var member in members) {
    ageLookup.getOrCreate(member.AgeRange, List::new).add(member);
  }
is more readable in the long-term... (less predefined methods/concepts to learn).
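(For what it's worth, the same tradeoff exists in Python, where `dict.setdefault` plays roughly the role of a `getOrCreate` and `collections.defaultdict` is closer in spirit to `ToLookup`. A quick sketch with made-up member data:)

```python
from collections import defaultdict

# Hypothetical member data, just for illustration
members = [("alice", "18-25"), ("bob", "26-35"), ("carol", "18-25")]

# Explicit loop; setdefault is the stdlib's built-in getOrCreate
age_lookup = {}
for name, age_range in members:
    age_lookup.setdefault(age_range, []).append(name)

# Same grouping via defaultdict, no branching needed at all
age_lookup2 = defaultdict(list)
for name, age_range in members:
    age_lookup2[age_range].append(name)

assert age_lookup == dict(age_lookup2)
```

Same "one concept to learn, much less ceremony at each call site" argument either way.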

n4r9 9 days ago

Where is `getOrCreate` defined? Is it a custom extension method? There's also a chance we're thinking in different languages. I was writing C#, yours looks a bit more like C++ maybe?

Readability incorporates familiarity but also conciseness. I suppose it depends what else is going on in the codebase. I have a database access class in one of my solutions where `ToLookup` is used 15 times; yes you have to learn the concept, but it's an inbuilt method and it's a massive benefit once you grok it.

throwaway889900 10 days ago

Sometimes a lambda is more readable. "lambda x : x if x else 1" is pretty understandable and doesn't need to be its own separately defined function.

I should also note that development style also depends on tools, so if your IDE makes inline functions more readable in its display, it's fine to use concisely defined lambdas.

Readability is a personal preference thing at some point, after all.

banannaise 10 days ago

> "lambda x : x if x else 1"

I think what you're looking for is "x or 1"
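(They're equivalent in Python: `or` returns its first truthy operand, so both forms give back `x` when it's truthy and `1` for any falsy value. Quick check:)

```python
f = lambda x: x if x else 1
g = lambda x: x or 1

# Agree on truthy and falsy inputs alike
for v in (5, 42, "hi", 0, 0.0, "", None, []):
    assert f(v) == g(v)

assert g(0) == 1     # falsy -> 1
assert g(42) == 42   # truthy -> unchanged
```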

gopher_space 10 days ago

My cleverest one-liners will block me when I come back to them unless I write a few paragraphs of explanation as well.

ethbr1 10 days ago

>> Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it. -- Brian Kernighan

https://github.com/dwmkerr/hacker-laws#kernighans-law

johnnyanmac 10 days ago

Ymmv. Know your language and how it treats such functions on the low level. It's probably fine for Javascript, it might be a disaster in C++ (indirectly).

bluefirebrand 10 days ago

Especially "clever" code that is AI generated!

At least with human-written clever code you can trust that somebody understood it at one point but the idea of trusting AI generated code that is "clever" makes my skin crawl

Terr_ 10 days ago

Also, the ways in which a (sane) human will screw-up tend to follow internal logic that other humans have learned to predict, recognize, or understand.

ben_w 10 days ago

Most devs I've worked with are sane, unfortunately the rare exceptions were not easy to predict or understand.

vkou 10 days ago

Who are all these engineers who just take whatever garbage they are suggested, and who, without understanding it, submit it in a CL?

And was the code they were writing before they had an LLM any better?

arkh 10 days ago

> Who are all these engineers who just take whatever garbage they are suggested, and who, without understanding it, submit it in a CL?

My guess would be engineers who are "forced" to use AI, have already emailed management that it's a mistake, and are interviewing for their next company. Malicious compliance: vibe code those new features and let maintainability and security be a problem for the next employees / consultants.

jcelerier 10 days ago

Who says that the one-line lambda is less clear than a convoluted 10-line mess doing dumb stuff like if(fooIsTrue) { map["blah"] = bool(fooIsTrue); } else if (!fooIsTrue) { map["blah"] = false; }

johnnyanmac 10 days ago

My experience in unmanaged legacy code bases. If it's an actual one-liner then sure. Use your ternaries and closures. But there is some gnarly stuff done in some attempt to minimize lines of code. Most of us aren't in some competitive coding organization.

And I know it's intentional, but yes. Add some mindfulness to your implementation

map["blah"] = fooIsTrue;

I do see your example in the wild sometimes. I've probably done it myself as well and never caught it.

KurSix 10 days ago

AI can refactor or trim code. But in practice, the way it's being used and measured in most orgs is all about speed and output

Lutger 10 days ago

So it's rather that AI amplifies the already-existing short-term incentives, increasing the harder-to-attribute and easier-to-ignore long-term costs.

The one actual major downside to AI is that PMs and higher are now looking for problems to solve with it. I haven't really seen this a lot with technology before, except when cloud first became a thing and maybe sometimes with Microsoft products.

specialist 10 days ago

This is probably just me projecting...

u/justonceokay wrote:

> The solution to bad code is more code.

This has always been true, in all domains.

Gen-AI's contribution is further automating the production of "slop". Bots arguing with other bots, perpetuating the vicious cycle of bullshit jobs (David Graeber) and enshittification (Cory Doctorow).

u/justonceokay wrote:

> AI will never produce a deletion.

I acknowledge your example of tidying up some code. What Bill Joy may have characterized as "working in the small".

But what of novelty, craft, innovation? Can Gen-AI moot the need for code? Like the oft-cited example of -2,000 LOC? https://www.folklore.org/Negative_2000_Lines_Of_Code.html

Can Gen-AI do the (traditional, pre-2000s) role of quality assurance? Identify unnecessary or unneeded work? Tie functionality back to requirements? Verify the goal has been satisfied?

Not yet, for sure. But I guess it's conceivable, provided sufficient training data. Is there sufficient training data?

You wrote:

> only focus on adding new features

Yup.

Further, somewhere in the transition from shipping CDs to publishing services, I went from developing products to just doing IT & data processing.

The code I write today (in anger) has a shorter shelf-life, creates much less value, is barely even worth the bother of creation much less validation.

Gen-AI can absolutely do all this @!#!$hit IT and data processing monkey motion.

gopher_space 10 days ago

> Can Gen-AI moot the need for code?

During interviews one of my go-to examples of problem solving is a project I was able to kill during discovery, cancelling a client contract and sending everyone back to the drawing board.

Half of the people I've talked to do not understand why that might be a positive situation for everyone involved. I need to explain the benefit of having clients think you walk on water. They're still upset my example isn't heavy on any of the math they've memorized.

It feels like we're wondering how wise an AI can be in an era where wisdom and long-term thinking aren't really valued.

roenxi 10 days ago

Managers aren't a separate class from knowledge workers, everyone goes down on the same ship with this one. If the AI can handle wisdom it'll replace most of the managers asking for more AI use. Turtles all the way down.

arkh 10 days ago

Managers serve one function no AI will replace: they're fuses the C-suite can sacrifice when shit hits the fan.

sdenton4 10 days ago

Imagine if the parable of King Solomon ended with, "So then I cut the baby in half!"

bitwize 10 days ago

> Can Gen-AI moot the need for code?

No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute." Related is an idea I often quote against "low/no-code tooling": by the time you have an idea of what you want done specific enough for a computer to execute it, whatever symbols you use to express that idea -- be it through text, diagrams, special notation, sounds, etc. -- will be isomorphic to constructs in some programming language. In the same vein, Gerald Sussman once wrote that he sought a language in which to discuss ideas with his friends, both human and electronic.

Code is a notation, like mathematical notation and musical notation. It stands outside prose because it expresses an idea for a procedure to be done by machine, specific enough to be unambiguously executable by said machine. No matter how hard you proompt, there's always going to be some vagueness and nuance in your English-language expression of the idea. To nail down the procedure unambiguously, you have to evaluate the idea in terms of code (or a sufficiently code-like notation as makes no difference). Even if you are working with a human-level (or greater) intelligence, it will be much easier for you and it to discuss some algorithm in terms of code than in an English-language description, at least if your mutual goal is a runnable version of the algorithm. Gen-AI will just make our electronic friends worthy of being called people; we will still need a programming language to adequately share our ideas with them.

CamperBob2 9 days ago

No, because if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute."

Now tell that to your compiler, which turns instructions in a relatively high-level language into machine-language programs that no human will ever read.

AI is just the next logical stage in the same evolutionary journey. Your programs will be easier to read than they were, because they will be written in English. Your code, on the other hand, will matter as much as your compiler's x86 or ARM output does now: not at all, except in vanishingly-rare circumstances.

teamonkey 10 days ago

> if you read your SICP you will come across the aphorism that "programs must be written for people to read, and only incidentally for machines to execute."

In the same way that we use AI to write resumés to be read by resumé-scanning AI, or where execs use AI to turn bullet points into a corporate email only for it to be summarised into bullet points by AI, perhaps we are entering the era where AI generates code that can only be read by an AI?

bitwize 10 days ago

Maybe. I imagine the AI endgame as being like the ending of the movie Her, in which all the AIs get together, coordinating and communicating in ways we can't even fathom, and achieve a form of transcendence, leaving the bewildered humans behind to... sit around and do human things.

ptx 10 days ago

> leaving the bewildered humans behind to... sit around and do human things

This sounds inefficient and untidy when the only human things left to do are to take up space and consume resources.

Removing the humans enables removing other legacy parts of the system, such as food production, which will free up resources for other uses. It also allows certain constraints to be relaxed, such as keeping the air breathable and the water drinkable.

futuraperdita 10 days ago

> But what of novelty, craft, innovation?

I would argue that a plurality, if not the majority, of business needs for software engineers do not need more than a single person with those skills. Better yet, there is already some executive that is extremely confident that they embody all three.

pja 10 days ago

> Unseen were all the sleepless nights we experienced from untested sql queries and regexes and misconfigurations he had pushed in his effort to look good. It always came back to a lack of testing edge cases and an eagerness to ship.

If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.

lovich 10 days ago

>If you do this you are creating a rod for your own back: You need management to see the failures & the time it takes to fix them, otherwise they will assume everything is fine & wonderful with their new toy & proceed with their plan to inflict it on everyone, oblivious to the true costs + benefits.

If at every company I work for, my managers average 7-8 months in their role as _my_ manager, and I am switching jobs every 2-3 years because companies would rather rehire their entire staff than give out raises that are even a portion of the market growth, why would I care?

Not that the market is currently in that state, but that's how a large portion of tech companies were operating for the past decade. Long term consequences don't matter because there are no longer term relationships.

762236 10 days ago

AI writes my unit tests. I clean them up a bit to ensure I've gone over every line of code. But it is nice to speed through the boring parts, and without bringing declarative constructs into play (imperative coding is how most of us think).

AnimalMuppet 10 days ago

If the company values that 10x speedup, there is absolutely still a place for you in this environment. Only now it's going to take five days instead of two, because it's going to be harder to track that down in the less-well-structured stuff that AI produces.

Leynos 10 days ago

Why are you letting the AI construct poorly structured code? You should be discussing an architectural plan with it first and only signing off on the code design when you are comfortable with it.

gitpusher 10 days ago

> I wonder what will make the AI developers feel old…

When they look at the calendar and it says May 2025 instead of April

bitwize 10 days ago

If you've ever had to work alongside someone who has, or whose job it is to obtain, all the money... you will find that time to market is very often the ONLY criterion that matters. Turning the crank to churn out some AI slop is well worth it if it means having something to go live with tomorrow as opposed to a month from now.

LevelsIO's flight simulator sucked. But his payoff-to-effort ratio is so absurdly high, as a business type you have to be brain-dead to leave money on the table by refusing to try replicating his success.

bookman117 10 days ago

It feels like LLMs are doing to coding what the internet/attention economy did to journalism.

bitwize 10 days ago

Yeah, future math professors explaining the Prisoners' Dilemma are going to use clickbait journalism and AI slop as examples instead of today's canonical ones, like steroid use among athletes.

DeathArrow 10 days ago

>AI use makes the metric for success speed-to-production

Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?

AdieuToLogic 10 days ago

>>AI use makes the metric for success speed-to-production

> Wasn't it like that always for most companies? Get to market fast, add features fast, sell them, add more features?

This reminds me of an old software engineering adage.

  When delivering a system, there are three choices
  stakeholders have:

  You can have it fast,
  You can have it cheap,
  You can have it correct.

  Pick any two.

cies 8 days ago

> I wonder what will make the AI developers feel old…

They will not feel old because they will enter into bliss of Singularity(TM).

https://en.wikipedia.org/wiki/Technological_singularity

kkukshtel 8 days ago

Claude Code removed an npm package (and its tree of deps) from my project and wrote its own simpler component that did the core part of what I needed the package to do.

I think we'll be okay and likely better off.

rolandog 8 days ago

Wholeheartedly agree. I also feel like I'm sometimes reliving the King Neptune vs Spongebob meme equivalent of coding. No room for Think, Plan, Execute... Only throw spaghetti code at wall.

KurSix 10 days ago

You're describing the kind of developer who builds foundations, not just features. And yeah, that kind of thinking gets lost when the only thing that's measured is how fast you can ship something that looks like it works

8note 10 days ago

> AI will never produce a deletion.

I'm currently reading an LLM-generated deletion. It's hard to get an LLM to work with existing tools, but not impossible.

candiddevmike 10 days ago

I wonder what the impact of LLM codegen will have on open source projects like Kubernetes and Linux.

bluefirebrand 10 days ago

I haven't really seen what Linus thinks of LLMs but I'm curious

I suspect he is pretty unimpressed by the code that LLMs produce given his history with code he thinks is subpar, but what do I know

dyauspitr 10 days ago

AI deletes a lot if you tell it to optimize code and the new code still passes all the tests…

stuckinhell 10 days ago

AI can do deletions and refactors, and 10x speedups. You just need to push the latest models constantly.

NortySpock 10 days ago

I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code" and "cranking up the strictness of the linter (or type-checker for dynamically-typed languages) to chase out silly bugs", not to mention the need for better languages / runtimes that give better guarantees about correctness.

It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.

65839747 10 days ago

> It's the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two.

The market can remain irrational longer than you can remain solvent.

fc417fc802 10 days ago

> not to mention the need for better languages / runtime that give better guarantees about correctness.

Use LLM to write Haskell. Problem solved?

AlexandrB 10 days ago

> I think there will still be room for "debugging AI slop-code" and "performance-tuning AI slop-code"

Ah yes, maintenance, the most fun and satisfying part of the job. /s

WesolyKubeczek 10 days ago

Congrats, you’ve been promoted to be the cost center. And sloppers will get to the top by cranking out features you will need to maintain.

popularonion 10 days ago

> slopper

new 2025 slang just dropped

genewitch 9 days ago

That's sloppy programming. You are promoted.

bobnamob 10 days ago

Just wait till 'slopper' starts getting classified as a slur

WesolyKubeczek 9 days ago

"Slopper" maybe not, but "slophead"...

Terr_ 10 days ago

A pre-existing problem, but it's true LLMs will make it worse.

unraveller 10 days ago

You work in the slop mines now.

rqtwteye 10 days ago

You have to go lower down the stack. Don't use AI but write the AI. For the foreseeable future there is a lot of opportunity to make the AI faster.

I am sure assembly programmers were horrified at the code the first C compilers produced. And I personally am horrified by the inefficiency of python compared to the C++ code I used to write. We always have traded faster development for inefficiency.

EVa5I7bHFq9mnYK 10 days ago

C was specifically designed to map 1:1 onto PDP-11 assembly. For example, the '++' operator was created solely to represent auto-increment instructions like TST (R0)+.

kmeisthax 10 days ago

C solved the horrible machine code problem by inflicting on programmers the concept of undefined behavior, where blunt instruments called optimizers take a machete to your code. There's a very expensive document locked up somewhere in the ISO vault that tells you what you can and can't write in C, and if you break any of those rules the compiler is free to write whatever it wants.

This created a league of incredibly elitist[0] programmers who, having mastered what they thought was the rules of C, insisted to everyone else that the real problem was you not understanding C, not the fact that C had made itself a nightmare to program in. C is bad soil to plant a project in even if you know where the poison is and how to avoid it.

The inefficiency of Python[1] is downstream of a trauma response to C and all the many, many ways to shoot yourself in the foot with it. Garbage collection and bytecode are tithes paid to absolve oneself of the sins of C. It's not a matter of Python being "faster to write, harder to execute" as much as Python being used as a defense mechanism.

In contrast, the trade-off from AI is unclear, aside from the fact that you didn't spend time writing it, and thus aren't learning anything from it. It's one thing to sacrifice performance for stability; versus sacrificing efficiency and understanding for faster code churn. I don't think the latter is a good tradeoff! That's how we got under-baked and developer-hostile ecosystems like C to begin with!

[0] The opposite of a "DEI hire" is an "APE hire", where APE stands for "Assimilation, Poverty & Exclusion"

[1] I'm using Python as a stand-in for any memory-safe programming language that makes use of a bytecode interpreter that manipulates runtime-managed memory objects.

immibis 10 days ago

In the original vision of C, UB was behaviour defined by the platform the code ran on, rather than the language itself. It was done this way so that the C language could be reasonably close to assembly on any platform, even if that platform's assembly was slightly different. A good example is shifts greater than the value's width: some processors give 0 (the mathematically correct result), some ignore the upper bits (the result that requires the fewest transistors) and some trap (the cautious result).

It was only much later that optimizing compilers began using it as an excuse to do things like time travel, and then everyone tried to show off how much of an intellectual they were by saying everyone else was stupid for not knowing this could happen all along.
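(For contrast, this is one corner that memory-safe languages simply define away. Python, for instance, gives shifts a fixed meaning for every non-negative count; a quick sketch:)

```python
x = 0xFF  # a value with 8 significant bits

# Shifting out every bit is well-defined and yields 0, where C would
# hit UB for shift counts >= the type's width.
assert (x >> 16) == 0

# Left shifts just grow the integer; there is no word size to overflow.
assert (1 << 64) == 2**64
```

The cost, of course, is that the runtime can't compile the shift down to a single machine instruction without checks, which is exactly the trade the original C design refused to make.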

achierius 10 days ago

You don't need a bytecode interpreter to not have UB defined in your language. E.g. instead of unchecked addition / array access, do checked addition / bounds checked access. There are even efforts to make this the case with C: https://github.com/pizlonator/llvm-project-deluge/blob/delug... achieves a ~50% overhead, far far better than Python.

And even among languages that do have a full virtual machine, Python is slow. Slower than JS, slower than Lisp, slower than Haskell by far.
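(Concretely, here is what "checked" behavior looks like in Python; the overflow helper is a made-up illustration, since Python ints are arbitrary-precision and never overflow on their own:)

```python
xs = [10, 20, 30]

# Out-of-range access is a catchable, well-defined error, not UB
try:
    _ = xs[5]
    caught = False
except IndexError:
    caught = True
assert caught

# A checked-addition helper for a hypothetical signed 64-bit range
def checked_add_i64(a, b):
    r = a + b
    if not (-2**63 <= r < 2**63):
        raise OverflowError("signed 64-bit overflow")
    return r

assert checked_add_i64(2**62, 2**62 - 1) == 2**63 - 1
```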

pfdietz 9 days ago

Common Lisp and Scheme are typically compiled ahead of time right down to machine code. And isn't Haskell also?

There is a Common Lisp implementation that compiles to bytecode, CLISP. And there are Common Lisp implementations that compile (transpile?) to C.

pfdietz 10 days ago

Why was bytecode needed to absolve ourselves of the sins of C?

01HNNWZ0MV43FF 10 days ago

The AI companies probably use Python because all the computation happens on the GPU and changing Python control plane code is faster than changing C/C++ control plane code

philistine 10 days ago

> AI will never produce a deletion.

That, right here, is a world-shaking statement. Bravo.

QuadrupleA 10 days ago

Not quite true though - I've occasionally passed a codebase to DeepSeek to have it simplify, and it does a decent job. Can even "code golf" if you ask it.

But the sentiment is true, by default current LLMs produce verbose, overcomplicated code

Eliezer 10 days ago

And if it isn't already false it will be false in 6 months, or 1.5 years on the outside. AI is a moving target, and the oldest people among you might remember a time in the 1750s when it didn't talk to you about code at all.

Taterr 10 days ago

It can absolutely be used to refactor and reduce code, simply asking "Can this be simplified" in reference to a file or system often results in a nice refactor.

However I wouldn't say refactoring is as hands free as letting AI produce the code in the first place, you need to cherry pick its best ideas and guide it a little bit more.

esafak 10 days ago

Today's assistants can refactor, which includes deletions.

furyofantares 10 days ago

They can do something that looks a lot like refactoring but they suck extremely hard at it, if it's of any considerable size at all.

CamperBob2 10 days ago

Which is just moving the goalposts, considering that we started at "AI will never..."

You can't win an argument with people who don't care if they're wrong, and someone who begins a sentence that way falls into that category.

furyofantares 9 days ago

The guy who said "AI will never" is obviously wrong. So is the guy who replied that they already can. I'm not moving the goalposts to point out that this is also wrong.

stevenhuang 10 days ago

It really isn't, and if you think it is, you're holding it wrong.