Of course, but this doesn't undermine the OP's point of "who allows the AI to do stuff without reviewing it". Even WITHOUT the "vulnerability" (if we call it that), AI may always produce code that is vulnerable in some way. The vulnerability certainly increases the risk a lot, so it is a real risk and should also be addressed, e.g. by tools that show all characters in text files, but AI code always needs to be reviewed - just like human code.
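To make the "showing all characters" point concrete, here is a minimal sketch (Python; the script and its behavior are my own illustration, not something from the thread) that flags zero-width and bidirectional-control characters in a text file, the kind of invisible content a reviewer would never see in a normal diff:

```python
import sys
import unicodedata

# Characters commonly abused to hide or reorder text in source files:
# zero-width characters and bidi embedding/override/isolate controls.
SUSPICIOUS = {
    "\u200b", "\u200c", "\u200d", "\u2060", "\ufeff",   # zero-width characters
    "\u202a", "\u202b", "\u202c", "\u202d", "\u202e",   # bidi embeddings/overrides
    "\u2066", "\u2067", "\u2068", "\u2069",             # bidi isolates
}

def flag_invisible(path: str) -> None:
    """Print the position of every suspicious invisible character in a file."""
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            for col, ch in enumerate(line, start=1):
                # Category "Cf" covers format characters, which render as nothing.
                if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
                    name = unicodedata.name(ch, "UNKNOWN")
                    print(f"{path}:{lineno}:{col} U+{ord(ch):04X} ({name})")

if __name__ == "__main__":
    for p in sys.argv[1:]:
        flag_invisible(p)
```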
The point is this: vulnerable code often makes it to production, despite the best intentions of virtually all people writing and reviewing the code. If you add a malicious actor looking over the developers' shoulders and suggesting code to them, it is virtually certain that you will increase the amount of vulnerable and/or malicious code that makes it into production, statistically speaking. Sure, you have methods to catch many of these. But as long as your filters aren't 100% effective (and no one's filters are 100% effective), then the more garbage you push through them, the more garbage you'll get out.
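A back-of-the-envelope sketch of that "more garbage in, more garbage out" argument, with entirely made-up numbers:

```python
# Toy illustration only: even a good review process lets through a fixed
# fraction of bad changes, so more vulnerable suggestions going in means
# more vulnerable code coming out the other side.
catch_rate = 0.95  # assumed fraction of vulnerable changes caught in review

for vulnerable_changes in (20, 100, 500):
    escaped = vulnerable_changes * (1 - catch_rate)
    print(f"{vulnerable_changes:>4} vulnerable changes reviewed -> ~{escaped:.0f} reach production")
```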
The OP's point about "who allows the AI to do stuff without reviewing it" is undermined by reality in multiple ways:
1. A dev may be using AI and nobody knows; they are trusted more than AI would be, so their code does not get as thorough a review as known AI code would.
2. People review code all the time and subtle bugs still creep in. The fact that people review code is not a defense against bugs creeping in; if it were, there would be no bugs in organizations that review code.
3. People may not review at all, or only glance for a second, because it's a small ticket. They just changed dependencies!
More examples are left to the reader's imagination.