> Not only that, the generated code was high-quality, efficient, and conformed to my coding guidelines. It routinely "checked its work" by running unit tests to eliminate hallucinations and bugs.
This seems completely out of whack with my experience of AI coding. I'm definitely in the "it's extremely useful" camp, but there's no way I would describe its code as high-quality and efficient. It can handle simple tasks, but it often gets things completely wrong or takes a noob-level approach (e.g. O(N) instead of O(1)).
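To make that concrete, here's a contrived C# sketch of the kind of complexity miss I mean (my own example, not anything a model actually gave me): scanning a List on every membership check when a HashSet would do the same lookup in constant time.

    // Contrived example (mine, not generated output): membership checks.
    using System;
    using System.Collections.Generic;
    using System.Linq;

    class LookupExample
    {
        static void Main()
        {
            List<int> ids = Enumerable.Range(0, 100_000).ToList();

            // The "noob" version: List<T>.Contains walks the list -- O(N) per check.
            bool slow = ids.Contains(99_999);

            // The obvious fix: build a HashSet<T> once, then each check is O(1) on average.
            var idSet = new HashSet<int>(ids);
            bool fast = idSet.Contains(99_999);

            Console.WriteLine($"{slow} {fast}");
        }
    }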
Is there some trick to this that I don't know? Because personally I would love it if AI could do some of the grunt work for me. I do enjoy programming but not all programming.
Which model and tool are you using? There's a whole spectrum of AI-assisted coding.
ChatGPT, Claude (both through the website), and GitHub Copilot (the paid tier, if it makes any difference).
I use the same, with a sprinkling of Gemini 2.5 and Grok 3.
I find they all make errors, but I spot 95% of them immediately by eye and either correct them manually or reroll through prompting.
The error rate has gone down over the last six months, though, and the C# code I mostly generate has become an order of magnitude more efficient. I would rarely write code by hand that is more efficient than what the AI produces now. (That said, I use a prompt that tells it to use all the latest platform advances and to search the web first for recent updates that will improve the efficiency and size of the code.)
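For what it's worth, here's a contrived sketch (again my own, not actual generated output) of the kind of "latest platform advances" win I have in mind in C#: parsing fields out of a string with ReadOnlySpan<char> instead of Split, which avoids the per-field allocations.

    // Contrived example (not actual generated code): summing fields in a CSV line.
    using System;

    class SpanExample
    {
        static void Main()
        {
            const string line = "10,20,30,40,50";

            // Older style: Split allocates a string[] plus one string per field.
            int naive = 0;
            foreach (string part in line.Split(','))
                naive += int.Parse(part);

            // Newer style: slice a ReadOnlySpan<char> and parse in place, no per-field allocations.
            int modern = 0;
            ReadOnlySpan<char> rest = line.AsSpan();
            while (!rest.IsEmpty)
            {
                int comma = rest.IndexOf(',');
                ReadOnlySpan<char> field = comma < 0 ? rest : rest[..comma];
                modern += int.Parse(field);
                rest = comma < 0 ? ReadOnlySpan<char>.Empty : rest[(comma + 1)..];
            }

            Console.WriteLine($"{naive} {modern}");
        }
    }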