AlexITC 5 days ago

Not everything needs K8s. Over the years I have worked with many different deployment approaches; how I deploy depends heavily on the kind of work I'm doing, but my rule is to keep it as simple as possible.

In my own projects I have stayed with Ansible the longest: once the scripts are built, you can use them to deploy most web apps the same way, and stuff rarely breaks.

For websites I have switched away from Ansible to simple shell scripts ("npm run build && scp ..."). I have also done this for web apps, but it gets a bit more complex once you need health checks and rollbacks.
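
Roughly the shape of the script I mean, as a sketch: the host, paths, dist/ output dir and the "myapp" service name are all placeholders, and the health-check/rollback part is exactly where it starts to grow.

    #!/usr/bin/env bash
    set -euo pipefail

    # Placeholders: adjust host, paths and service name to your setup.
    HOST="deploy@example.com"
    APP_DIR="/opt/myapp"
    STAMP="$(date +%Y%m%d%H%M%S)"

    npm run build
    tar -czf "build-$STAMP.tar.gz" -C dist .
    scp "build-$STAMP.tar.gz" "$HOST:/tmp/"

    # Unpack into a fresh release dir, remember the previous one, then flip the symlink.
    ssh "$HOST" "set -e
      mkdir -p $APP_DIR/releases/$STAMP
      tar -xzf /tmp/build-$STAMP.tar.gz -C $APP_DIR/releases/$STAMP
      if [ -L $APP_DIR/current ]; then readlink $APP_DIR/current > /tmp/previous-release; fi
      ln -sfn $APP_DIR/releases/$STAMP $APP_DIR/current
      sudo systemctl restart myapp"

    # Naive health check; if the new release does not come up, point the symlink back.
    sleep 5
    if ! curl -fsS https://example.com/health > /dev/null; then
      echo 'health check failed, rolling back' >&2
      ssh "$HOST" "ln -sfn \$(cat /tmp/previous-release) $APP_DIR/current && sudo systemctl restart myapp"
      exit 1
    fi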

In general, most of my work involves web apps and I start with this and grow from there:

- Monolith backend + Postgres + same language for backend and frontend with shared code.

- A small Linux server on a cloud with fixed pricing (like DigitalOcean), with backups enabled.

- When the project allows it, Postgres is installed on the same VM, which keeps the price small; the server backups cover data recovery.

- Use nginx as the entrypoint to the app. This is very flexible once you are used to it; for example, you can do caching + rate limiting with simple configuration (see the sketch after this list).

- Use certbot to get the SSL certificate.

- Use systemd to keep the app running (a minimal unit file is sketched after this list).

- A cheap monitoring service to keep pinging my app.

- Deploys are triggered from my computer unless it is justified to delegate this to the CI.
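
For the nginx part, the configuration I mean is roughly this. It's only a sketch: zone names, sizes, ports and paths are made up, and the cache/rate-limit zones have to be declared at the http level.

    # In the http block: a cache and a per-IP rate-limit zone (placeholder names/sizes).
    proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=5g inactive=7d;
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    server {
        listen 443 ssl;
        server_name example.com;
        # certbot fills in ssl_certificate / ssl_certificate_key here.

        location /api/ {
            limit_req zone=api_limit burst=20 nodelay;
            proxy_pass http://127.0.0.1:8080;
        }

        # Immutable data (e.g. old blocks/transactions) can be cached aggressively.
        location /api/blocks/ {
            proxy_cache api_cache;
            proxy_cache_valid 200 30d;
            proxy_pass http://127.0.0.1:8080;
        }
    }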
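
The systemd side is just a small unit file; again, the name and paths are placeholders.

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My web app
    After=network.target postgresql.service

    [Service]
    User=myapp
    WorkingDirectory=/opt/myapp/current
    ExecStart=/usr/bin/node server.js
    Restart=always
    RestartSec=5
    Environment=NODE_ENV=production

    [Install]
    WantedBy=multi-user.target

Then "systemctl enable --now myapp" once, and "journalctl -u myapp" for logs.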

For a while now I have found Ansible too slow, and I have been meaning to finish building my own general-purpose tool for deploying web apps this way, but I have no idea if I'll ever be done with it.

Perhaps the most important project I used to run with this approach was a custom block-explorer API that indexed Bitcoin plus a few other cryptocurrencies (which meant more than 1TB of Postgres storage), and it scaled well on a single VM; aggressive nginx caching for immutable data helped a lot.

pseudocomposer 3 days ago

I’m also a DigitalOcean user, but I prefer managed K8s and don’t think there will ever be a reason to go back to having to deal with host OS things again. I’d rather just pay for my CPU/RAM, and give it Docker images to run, than worry about all that. And DOKS (DigitalOcean K8s) doesn’t cost any more than bare DigitalOcean boxes.

Cert-Manager is the K8s equivalent of Certbot and “just works” with deployed services. Nginx ingresses are a pretty standard thing there too. Monitoring is built in. And with a few API keys, it’s easy to do things like deploy from GitHub Actions when you push a commit to main, after running tests.
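
To make that concrete, the cert-manager part is mostly a single annotation on a standard Ingress. A sketch, assuming an nginx ingress controller and a ClusterIssuer named letsencrypt-prod are already installed; names and hosts are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      ingressClassName: nginx
      tls:
        - hosts: [myapp.example.com]
          secretName: myapp-tls
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 80
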

And perhaps most importantly, managed Kubernetes services let you attach storage for DB and clusters with standard K8s APIs (the only thing provider-/DigitalOcean-specific is the names of the storage service tiers). Also the same price as standard DigitalOcean storage with all their standard backups… but again, easier to set up, and standardized so that if DigitalOcean ever gets predatory, it’s easy enough to migrate to any of a dozen other managed K8s services.