"AI" is just a trick to circumvent the copyright laws that are the main brake on writing programs quickly.
The "AI"-generated code is just code extracted from the various sources used for training, code that a human programmer could not use, because its licenses would most likely be incompatible with the product for which the "AI" is used.
All my life I could have written any commercial software much faster if I had been free to just copy and paste random lines of code from open-source libraries and applications, from proprietary programs written for former employers, or from various programs I wrote as side projects with my own resources and on my own time, but whose copyrights I am not willing to assign to my current employer, since I would then no longer be able to use my own programs in the future.
I could search for and find suitable source code for any current task as fast as, and with far greater reliability than, prompting an AI application. I am just not permitted to do that by existing law, unlike the AI companies.
Already many decades ago, it was claimed that the solution to enhancing programmer productivity was more "code reuse". However, "code reuse" never happened at the scale imagined back then, not for technical reasons, but because of copyright laws, whose purpose is precisely to prevent code reuse.
Now "AI" appears to be the magical solution that can deliver "code reuse" at the scale dreamed of half a century ago, by escaping the constraints of copyright.
When writing a program for my personal use, I would never use an AI assistant, because it cannot accelerate my work in any way. For boilerplate code, I use various templates and very smart editor auto-completion; there is no need for any "AI" for that.
On the other hand, when writing a proprietary program, especially for an employer with stupid copyright rules, e.g. forbidding the use of libraries under different licenses even when those licenses are compatible with the requirements of the product, I would not hesitate to prompt an AI assistant to get code stripped of copyright, saving the time of rewriting equivalent code just so that the employer can hold the copyright to it.
Not sure why this is downvoted. People forget or weren’t around for the early 2000s when companies were absolutely preoccupied with code copyright and terrified of lawsuits. That loosened up only slightly during the GitHub/StackOverflow era.
If you proposed something like GitHub Copilot to any company in 2020, the legal department would’ve nuked you from orbit. Now it’s ok because “everyone is doing it and we can’t be left behind”.
Edit: I just realized this was a driver for why whiteboard puzzles became so big - the ideal employee for MSFT/FB/Google etc. was someone who could spit out library-quality, copyright-unencumbered, "clean room" code without access to an internet connection. That is what companies had to optimize for.
It's downvoted because it's plainly incorrect.
What part is incorrect?
The claim that it's just spitting out code it's been trained on. That is simply not the case, broadly speaking. Sure, if you ask it for a very specific algorithm that has a well-known implementation, you might end up with such a snippet, but in general it writes new code, not just a copy/paste of SO or whatever.
This is an extremely important point, and it's the first time I've seen it mentioned with regard to software copyright. Remember the days when companies got sued for including GPL'd code in their proprietary products?