This is not that surprising. First, it depends on how big the YAML files were and what was in them. If you have 200 services, I could easily see 200 YAML files. Second, there are non-service reasons to have YAML files: custom roles, ingresses, volumes, etc. And if you do not use something like Helm, you might also end up with a separate set of YAML files per environment (not the best idea, but it happens).
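To make that concrete, here is a rough sketch of the two layouts (file names, paths, and values below are made up for illustration, not taken from the article): without templating you end up copying each manifest per environment, while with Helm one chart is rendered against a small per-environment values file.

```yaml
# Without templating (hypothetical layout): the same manifest copied per
# environment, which is how the file count multiplies quickly.
#   k8s/dev/api-deployment.yaml
#   k8s/staging/api-deployment.yaml
#   k8s/prod/api-deployment.yaml

# With Helm (hypothetical layout): one chart plus a small values file per environment.
# values-dev.yaml
replicaCount: 1
image:
  tag: "latest"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
---
# values-prod.yaml
replicaCount: 3
image:
  tag: "1.4.2"
resources:
  requests:
    cpu: 500m
    memory: 512Mi
```

Each environment is then deployed with something like `helm upgrade --install api ./api-chart -f values-prod.yaml`, instead of maintaining three slowly diverging copies of the same Deployment.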
My suspicion is that the original environment (47 Kubernetes clusters, 200 YAML files, unreliable deployments, 3 clouds, etc.) was never really planned out. You probably had multiple teams provisioning infrastructure, half-completed projects, and even dead clusters (clusters spun up once and never destroyed after they stopped being used).
I give the DevOps team in the article a lot of credit for improving reliability, reducing costs, and increasing efficiency. They did good work.
> If you have 200 services, I could easily see 200 YAML files.
Out of curiosity, in what case would you _not_ see 200 files for 200 services? Even with Helm, you'd write a chart per app, wouldn't you?
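One pattern that can avoid a full chart per app (a common approach, not one described in the thread; the chart layout and value names below are made up) is a single shared "generic service" chart whose templates are parameterized, so each service only contributes a tiny values file:

```yaml
# templates/deployment.yaml in a single shared chart
# (illustrative sketch; names and defaults are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
spec:
  replicas: {{ .Values.replicaCount | default 2 }}
  selector:
    matchLabels:
      app: {{ .Values.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.name }}
    spec:
      containers:
        - name: {{ .Values.name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.port | default 8080 }}
```

Each service is then a few-line values file (name, image, port) deployed with something like `helm upgrade --install billing ./generic-chart -f values/billing.yaml`. You still have a file per service, but it is a handful of lines against one maintained set of templates, rather than 200 hand-written manifests or 200 separate charts.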
I've seen much-lauded "DevOps" or "platform" teams spend two months writing 500+ files for 3 simple Python services (5 if you count the two databases).
We could have spent a tiny fraction of those 10 dev-months deploying the same thing to production on a bare VM on any cloud platform, in a secure and probably very scalable way.
These days I cringe and shudder every time I hear someone mention writing "Helm charts" or using the word "workloads".