Especially "clever" code that is AI generated!
At least with human-written clever code you can trust that somebody understood it at one point, but the idea of trusting AI-generated code that is "clever" makes my skin crawl.
Also, the ways in which a (sane) human will screw up tend to follow an internal logic that other humans have learned to predict, recognize, or understand.
Most devs I've worked with are sane; unfortunately, the rare exceptions were not easy to predict or understand.
Who are all these engineers who just take whatever garbage they are suggested and, without understanding it, submit it in a CL?
And was the code they were writing before they had an LLM any better?
> Who are all these engineers who just take whatever garbage they are suggested and, without understanding it, submit it in a CL?
My guess would be engineers who are "forced" to use AI, have already emailed management that it would be a mistake, and are interviewing at their next company. Malicious compliance: vibe code those new features and let maintainability and security be a problem for the next employees / consultants.