A lot of the same kind of skill goes into prompting AI and delegating work to other humans. Delegation requires building intellectual empathy for the task recipient, giving them an instruction they can verifiably follow. It requires building trust, and more often than not a certain degree of trial, error, and watching others work before one can delegate reliably. A lot goes into delegation, and much of it is hard! It's also hard to be delegated to -- especially by someone you haven't worked with before. What do they mean when they ask for "more sparkles in the UI" or say "I tried C and it didn't work"? Can I guess their background to meet them where they are? The list goes on.
In some ways it's easier to delegate to an AI because you don't have to care for anyone's feelings but your own, and you lose nothing but your own time when things don't go well and you have to reset. On the other hand, when the delegation doesn't go well, you've still got yourself to blame first.
This is very accurate imo - it really is the skill of proper delegation. Same for asking AI questions in an unbiased way so it doesn’t just try to please you - this has made me better at asking questions to people as well!
It’s like a slightly over-eager junior-mid developer, except one that doesn’t mind rewriting 30k lines of tests from one framework to another. That means I can let it handle the dirty work while I focus on the fun and/or challenging parts myself.
I feel like there’s also a meaningful split between software engineers who primarily enjoy the process of crafting code itself, and those who primarily enjoy building stuff, treating the code more as a means to an end (even if they enjoy writing it!). The former will likely not have fun with AI, and will likely grow increasingly unhappy with how all of this evolves over time. The latter, I expect, are and will mostly be elated.
> It’s like a slightly over-eager junior-mid developer
One with brain damage, maybe. I tried having Claude & Gemini make an absolutely trivial change to a Go program (changing the units displayed in an output type), and it got one of the four lines of code correct (the actual math for the unit conversion); the rest was incorrect.
In the end, I integrated the helper function it output myself.
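For scale, the whole change was on this order of triviality (a hypothetical sketch; the actual program, names, and units aren't shown in this thread):

    package main

    import "fmt"

    // bytesToMiB converts a raw byte count to mebibytes.
    // Hypothetical helper: the real program's identifiers and units are assumptions.
    func bytesToMiB(b int64) float64 {
        return float64(b) / (1024 * 1024)
    }

    func main() {
        var used int64 = 734003200
        // Before: fmt.Printf("used: %d bytes\n", used)
        // After: the same value, displayed in MiB instead.
        fmt.Printf("used: %.1f MiB\n", bytesToMiB(used))
    }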
SOTA models can generate two or three lines of code accurately at a time, and you have to describe them with such specificity that I've usually already done the hard part of the thinking by the time I have a specific enough prompt; at that point it's easier to just type out the code.
At best they save me from looking up a unit conversion formula, which makes them about as useful as a search engine.
That sounds very unlike my experience. I frequently get it to modify / create large parts of files at a time, successfully.
> you lose nothing but your own time when things don't go well and you have to reset
Crucially, you lose money with a lot of these models when they output the wrong thing, because you pay by token whether the tokens coming out are what you want or not.
It's a bit like a slot machine. You write your prompt, insert some money, and pull the lever. Sometimes it saves you a lot of time! Sometimes, not so much. Sometimes it gets 80% of the way there and you think: oh, let me just put in another coin, tweak my prompt, and pull the lever; this time it will get me to 100%.
Listening to people justify pulling the lever over and over again is a little bit like listening to an addict excusing their behavior.
I realize there are flat-rate plans like the ones Kagi offers, but the API offerings and IDE integrations all feature the slot-machine and sunk-cost effects I describe.