It's not true that coding would no longer be fun because of AI. Arithmetic did not stop being fun because of calculators. Travel did not stop being fun because of cars and planes. Life did not stop being fun because the old challenges went away.
New challenges would come up. If calculators made arithmetic easy, the math challenges simply moved up a level. If AI does all the thinking and creativity, humans would move up a level too. That level could be some menial work that AI can't touch. For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.
> For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.
Well this sounds delightful! Glad to be free of the thinking and creativity!
When you’re churning out many times more code per unit time, you had better think good and hard about how to organize it.
Everyone wanted to be an architect. Well, here’s our chance!
I find legacy systems fun because you're looking at an artefact built over the years by people. I can get a lot of insight into how a system's design and requirements changed over time, by studying legacy code. All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model.
> "All of that will be lost, drowned in machine-generated slop, if next decade's legacy code comes out the backside of a language model."
The fun part, though, is that future coding LLMs will eventually be poisoned by ingesting past LLM-generated slop code if left unrestricted. The most valuable codebases for improving future LLM quality will be the ones written by humans with strong coding skills who rely on LLMs minimally or not at all, which makes the humans who write them more valuable.
Think about it: a new, even better programming language is created, like Sapphire on Skates or whatever. How does an LLM know how to output high-quality, idiomatically correct code for that hot new language? The answer is that _it doesn't_. Not until 1) somebody writes good code in that language for the LLM to absorb, and 2) does so in a large enough quantity for patterns to emerge that the LLM can reliably identify as idiomatic.
It'll be pretty much like the end of Asimov's "The Feeling of Power" (https://en.wikipedia.org/wiki/The_Feeling_of_Power), or his almost uncannily LLM-relevant novella "Profession" (https://en.wikipedia.org/wiki/Profession_(novella)).
Thanks to git repositories stored away in Arctic tunnels, our common legacy-code heritage might outlast most other human artifacts (unless ASI chooses to erase it, of course).
That’s fine if you find that fun, but legacy archeology is a means to an end, not an end itself.
Legacy archaeology in a 60 MiB codebase is far easier than digging through email archives, requirements docs, and old PowerPoint files that Microsoft Office won't even open properly any more (though LibreOffice can, if you're lucky). Handwritten code actually expresses something about the requirements and design decisions, whereas AI slop buries that signal in so much noise that "archaeology" becomes almost impossible.
When insight from a long-departed dev is needed right now to explain why these rules work in this precise order, but fail when the order is changed, do you have time to git bisect to get an approximate date, then start trawling through chat logs in the hopes you'll happen to find an explanation?
Code is code. Yes, it can be more or less spaghetti, but if it compiles at all, it can be refactored.
Having to dig through all that other crap is unfortunate. Ideally you have tests that encapsulate the specs, which are then also code, and which help with said refactors.
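As a toy illustration of specs-as-tests (the discount rule and function below are hypothetical, not from any real system): each test states one business rule, so the suite doubles as the spec and guards any refactor.

    import unittest

    def apply_discount(price_cents, percent):
        """Hypothetical pricing rule: discounts round down to whole cents and never go below zero."""
        discounted = price_cents * (100 - percent) // 100
        return max(discounted, 0)

    class DiscountSpec(unittest.TestCase):
        # Each test is one sentence of the spec, written as executable code.
        def test_discount_rounds_down_to_whole_cents(self):
            self.assertEqual(apply_discount(999, 10), 899)  # 899.1 rounds down

        def test_discount_never_produces_a_negative_price(self):
            self.assertEqual(apply_discount(100, 150), 0)

    if __name__ == "__main__":
        unittest.main()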
We had enough tests to know that no other rule configuration worked. Heck, we had mathematical proof (and a small pile of other documentation too obsolete or cryptic to be of use), and still, the only thing that saved the project was noticing different stylistic conventions in different parts of the source, allowing the minor monolith to be broken down into "this is the core logic" and "these are the parts of a separate feature that had to be weaved into the core logic to avoid a circular dependency somewhere else", and finally letting us see enough of the design to make some sense out of the cryptic documentation. (Turns out the XML held metadata auxiliary to the core logic, but vital to the higher-level interactive system, the proprietary binary encoding was largely a compression scheme to avoid slowing down the core logic, and the system was actually 8-bit-clean from the start – but used its own character encoding instead of UTF-8, because it used to talk to systems that weren't.)
Test-driven development doesn't actually work. No paradigm does. Fundamentally, it all boils down to communication: and generative AI systems essentially strip away all the "non-verbal" communication channels, replacing them with the subtext equivalent of line noise. I have yet to work with anyone good enough at communicating that I can do without the side-channels.
Makes me think that the actual horrific solution here is that every single prompt and output ever made while developing must be logged and stored, as that might be the only documentation that exists for what was made.
Actually, thinking about it more: if I were running a company allowing or promoting AI use, that would be the first priority. Whatever is prompted must be stored forever.
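A minimal sketch of what that could look like, assuming a hypothetical `call_model` function standing in for whatever completion API is actually in use:

    import datetime
    import json
    import pathlib

    # Append-only record of every prompt/response pair: the documentation of last resort.
    LOG = pathlib.Path("prompt_log.jsonl")

    def logged_completion(call_model, prompt, **params):
        """Call the model, then persist the full exchange as one JSON line before returning."""
        response = call_model(prompt, **params)
        record = {
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "prompt": prompt,
            "params": params,
            "response": response,
        }
        with LOG.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")
        return response

An append-only JSONL file is only the simplest possible store, but the point stands either way: the record has to survive alongside the code it produced.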
> generative AI systems essentially strip away all the "non-verbal" communication channels
This is a human problem, not a technological one.
You can still have all your aforementioned broken PowerPoints etc. and use AI to help write code you would previously have written by hand.
If your processes are broken enough to create unmaintainable software, they will do so regardless of how code pops into existence. AI just speeds it up either way.
The software wasn't unmaintainable. The PowerPoints etc. were artefacts of a time when everyone involved understood some implicit context, within which the documentation was clear (not cryptic) and current (not obsolete). The only traces of that context we had, outside the documentation, were minor decisions made while writing the program: "what mindset makes this choice more likely?", "in what directions was this originally designed to extend?", etc.
Personally, I'm in the "you shouldn't leave vital context implicit" camp; but in this case, the software was originally written by "if I don't already have a doctorate, I need only request one" domain experts, and you would need an entire book to provide that context. We actually had a half-finished attempt – 12 names on the title page, a little over 200 pages long – and it helped, but chapter 3 was an introduction-for-people-who-already-know-the-topic (somehow more obscure than the context-free PowerPoints, though at least it helped us decode those), chapter 4 just had "TODO" on every chapter heading, and chapter 5 got almost to the bits we needed before trailing off with "TODO: this is hard to explain because" notes. (We're pretty sure they discussed this in more detail over email, but we didn't find it. Frankly, it's lucky we have the half-finished book at all.)
AI slop lacks this context. If the software had been written using genAI, there wouldn't have been the stylistic consistency to tell us we were on the right track. There wouldn't have been the conspicuous gap in naming, elevating "the current system didn't need that helper function, so they never wrote it" to a favoured hypothesis, allowing us to identify the only possible meaning of one of the words in chapter 3, and thereby learn why one of those rules we were investigating was chosen. (The helper function would've been meaningless at the time, although it does mean something in the context of a newer abstraction.) We wouldn't have been able to use a piece of debugging code from chapter 6 (modified to take advantage of the newer debug interface) to walk through the various data structures, guessing at which parts meant what using the abductive heuristic "we know it's designed deliberately, so any bits that appear redundant probably encode a meaning we don't yet understand".
I am very glad this system was written by humans. Sure, maybe the software would've been written faster (though I doubt it), but we wouldn't have been able to understand it after-the-fact. So we'd have had to throw it away, rediscover the basic principles, and then rewrite more-or-less the same software again – probably with errors. I would bet a large portion of my savings that that monstrosity is correct – that if it doesn't crash, it will produce the correct output – and I wouldn't be willing to bet that on anything we threw together as a replacement. (Yes, I want to rewrite the thing, but that's not a reasoned decision based on the software: it's a character trait.)
I guess I just categorically disagree that a codebase is impossible to understand without "sufficient" additional context. And I think you ascribe too much order to software written by humans, who can vary quite a lot in ability, experience, style, and care.
It was easy to understand what the code was instructing the computer to do. It was harder to understand what that meant, why it was happening, and how to change it.
A program to calculate payroll might be easy to understand, but unless you understand enough about finance and tax law, you can't successfully modify it. Same with an audio processing pipeline: you know it's doing something with Fourier transforms, because that's what the variable names say, but try to tweak those numbers and you'll probably destroy the sound quality. Or a pseudo-random number generator: modify that without understanding how it works, and even if your change feels better, you might completely break it. (See https://roadrunnerwmc.github.io/blog/2020/05/08/nsmb-rng.htm..., or https://redirect.invidious.io/watch?v=NUPpvoFdiUQ if you want a few more clips.)
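As a toy illustration of the PRNG point (a plain linear congruential generator using the well-known Numerical Recipes constants, not the generator from the linked post), a seemingly harmless tweak to one constant can collapse the period entirely:

    def cycle_length(a, c, m, seed=1, limit=1 << 20):
        """Iterate x -> (a*x + c) % m and count the steps in the cycle it eventually falls into."""
        seen = {}
        x = seed
        for i in range(limit):
            x = (a * x + c) % m
            if x in seen:
                return i - seen[x]
            seen[x] = i
        return limit

    M = 1 << 16  # small modulus so the demo runs instantly
    print(cycle_length(1664525, 1013904223, M))  # 65536: full period, every state visited once
    print(cycle_length(1664524, 1013904223, M))  # 1: nudging the multiplier by one collapses it to a fixed point

The point is the same as with the payroll and Fourier examples: the code is trivially readable, but the constants only make sense with the theory behind them (here, the Hull–Dobell conditions).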
I've worked with codebases written by people with varying skillsets, and the only occasions where I've been confused by the subtext have been when the code was plagiarised.
> New challenges would come up. If calculators made arithmetic easy, the math challenges simply moved up a level. If AI does all the thinking and creativity, humans would move up a level too. That level could be some menial work that AI can't touch. For example, navigating the complexities of legacy systems and workflows and human interactions needed to keep things working.
You’re gonna work on captcha puzzles and you’re gonna like it.