We just applied a Helm chart a while back, and it just works. We've maybe had a few incidents over the years, requiring things like Kafka queues to be wiped.
The argument that you have to read a shell script doesn't make sense to me. Are you gonna read the source code of any software referenced in that script, or of anything else you download, too? No? Then what's the difference between that and a bash script? At the end of the day, both can do damage.
We used the Helm chart, but things didn't get updated often enough to keep our container security tooling happy.
Helm is a huge pain in the butt if you have vulnerability-mitigation obligations, because the overall supply chain behind a one-command install can involve several different parties, who all update things at different frequencies :/
So chart A includes subchart B, which consumes an image from party C, who haven't updated to foobar X yet. You either wait for three different parties to update their stuff to get mainline fixed, or you roll up your sleeves and start rebuilding things, hosting your own images, and forking charts. At first you rebuild one image and set a value, but the problem grows over time.
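To make that intermediate stage concrete: with Helm you can usually point a subchart at a rebuilt image just by overriding values at install time. The chart, subchart, and registry names below are made up, and the exact value keys depend on the chart's values.yaml:

    # Hypothetical names; the real key paths depend on the chart in question.
    helm upgrade --install my-release vendor/chart-a \
      --set subchart-b.image.repository=registry.internal.example.com/patched/foobar \
      --set subchart-b.image.tag=1.2.3-patched

That's fine for one image, but once you're carrying a dozen of these overrides plus a forked chart or two, you've effectively become the vendor.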
If you update independently, you end up running version combinations of software that the OG vendor has never tested.
This is not Helm's fault, of course; it's just the reality of deploying software with a lot of moving parts.
Rereading that section, I'd agree it's probably not the best-argued point, because it implies security concerns... I guess what I'm saying is: for something I'm setting up to keep around for a while, I'd like to know a bit about what's in the package before I deploy it. In that sense, the shell script serves as a table of contents... and if the table of contents is 800 lines, that makes me wonder how many moving parts there are and how many things might break at inconvenient times because of them.
Personally, I would just run it on a clean cluster/VM somewhere (to be destroyed afterwards), just to see what happens. If you have no local resources to spare, an hour of even very high-end (to save time) VMs or a cluster at a provider like AWS costs next to nothing.
That solution didn't apply in my case at the time, since I was in an environment that combined security-consciousness with thick layers of bureaucracy, meaning that hardware came at a premium (and had to be on-premises).
Sure, but I'm not suggesting running there, just testing there. We also have to run in specific providers in specific locations, but nothing stops us from renting a clean, large VM in AWS for an hour or two to test things without using any customer data. Hell, that costs pretty much nothing, so if my employer didn't allow it, I would just pay with my own money - it's much better for your work efficiency to work out the kinks this way, since deleting a VM is much easier than doing ten cleanups after failed deployments.
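For what it's worth, the throwaway setup can be as simple as something like this with the AWS CLI; the AMI ID, instance type, and key name are placeholders:

    # Placeholder AMI/instance type/key name - spin up a beefy scratch box, test, destroy it.
    INSTANCE_ID=$(aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type m5.2xlarge \
      --key-name scratch-key \
      --query 'Instances[0].InstanceId' --output text)

    # ...run the install script / chart against it and see what breaks...

    aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"

A couple of hours of that is a rounding error compared to the time spent cleaning up failed deployments on shared hardware.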
Oh, and the most difficult part of the setup, from what I remember, was GitHub SSO plus the GitHub and Slack integrations, as those weren't well documented.