csdvrx 3 days ago

> But the usual Linux hobgoblins listed above are a red herring here, to my mind.

Absolutely

> If containers are the reason, then again, they are not a requirement. But they are pretty similar to BSD's jails. I don't think they are particularly complex.

The only point I agree with the author is that many things are shipped to be used with docker when they don't need to be, which creates a needless dependency.

n3storm 2 days ago

I have reverse-engineered Dockerfiles in order to avoid containers. Any software should be installable without Docker; it just takes more knowledge and time. Sometimes it takes hardly any: there is a plain binary (as with Go, Rust, and .NET); other times the long route is pip or apt plus some config fiddling. Databases are maybe the worst part, but once you get there, it means more control for you over your setup and what you want to do with it. Moving the database server to another directory or another server? No problem. Sometimes a Dockerfile deploys PostgreSQL when, for home use, you could configure a simple SQLite instead. If you end up modifying the Dockerfile, you understand what the application's requirements are, and you can install it raw.
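As a sketch of that reverse-engineering step (the base image, package name, and port below are made-up examples, not taken from any real Dockerfile):

```shell
# Hypothetical Dockerfile being "reverse engineered":
#   FROM python:3.11-slim
#   RUN pip install someapp
#   CMD ["someapp", "--port", "8080"]
#
# The same thing done raw on a Debian-ish host:
sudo apt-get install -y python3 python3-pip
pip3 install --user someapp    # the application itself
someapp --port 8080            # run it directly, no container
```

The RUN and CMD lines tell you exactly which commands the image author ran, which is usually all you need to reproduce the install on the host.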

dingi 2 days ago

Well, containers have some other use cases. Running old software not supported by the latest Linux distros is one of them. The MySQL 5.7 series hasn't installed cleanly on recent Linuxes for quite a while now. Containers are a godsend for situations like this.
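For example, something along these lines keeps 5.7 running on a current host (the data path and password are placeholders; `mysql:5.7` is the real image tag for that series):

```shell
# Pin an old database version that no longer installs cleanly
# from current distro repositories:
docker run -d --name mysql57 \
  -e MYSQL_ROOT_PASSWORD=changeme \
  -v /srv/mysql57:/var/lib/mysql \
  -p 3306:3306 \
  mysql:5.7
```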

fragmede 2 days ago

That is absolutely fascinating. Why do you want to avoid containers?

bigfishrunning 2 days ago

I'm not the parent poster, but I also like to avoid containers when I can. For instance, if there is a bug in some library or common dependency (think libssl or bash), it's easy to update it in one place rather than make sure a whole bunch of containers get updated. Also, when writing software, I find that targeting a container keeps you from thinking about portability (by intrinsically sidestepping the "it works on my machine" problem) and results in a more fragile end product.

majormajor 2 days ago

If you aren't getting the binary from your distro's package manager, the "update in one place for bugfixes" thing often no longer applies. At least with a container management system, the various not-distro-managed things have something akin to a standard way to version-bump them, versus "go download this from that FTP, go pull this from that repo, etc."

yjftsjthsd-h 2 days ago

As someone who does use containers: It depends™ on how you do things, but lots of containers are used as a way to consume mystery meat easily. Who made that image? What's in it? Do you trust the binaries in it? How often does it get updates? Are you keeping up with updates that are available? All of these are solvable, of course, but a lot of containers are "just docker run randomsource/whatever:latest and never think about it again".
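One way to reduce the mystery is to inspect and pin what you are actually running (reusing the hypothetical image name from above):

```shell
# Instead of trusting a floating tag forever...
docker pull randomsource/whatever:latest

# ...look at what it is and where it came from:
docker inspect --format '{{index .RepoDigests 0}}' randomsource/whatever:latest
docker history randomsource/whatever:latest   # the layers and build commands

# Then reference the immutable digest in your configs so it can't
# silently change underneath you:
#   randomsource/whatever@sha256:<digest printed above>
```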

nine_k 2 days ago

One reason could be bringing in auto-upgradable dependencies: much less to rebuild when a security patch release is issued.

This is doable with containers, too, if you agree to maintain and build them yourself.
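A minimal sketch of that self-maintained approach, assuming a Debian base image and a hypothetical `myapp` binary; every rebuild picks up whatever security patches the distro has published:

```dockerfile
# Rebuild this image whenever a security advisory lands;
# apt-get upgrade pulls the patched packages from the distro.
FROM debian:stable-slim
RUN apt-get update && apt-get upgrade -y \
    && apt-get install -y --no-install-recommends libssl3 \
    && rm -rf /var/lib/apt/lists/*
COPY myapp /usr/local/bin/myapp
CMD ["myapp"]
```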

n3storm 2 days ago

The most straightforward reason is not having multiple database servers: reusing one PostgreSQL or MariaDB for several instances of the same app.
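Concretely, that's one server plus a role and database per app instance, instead of a bundled server per container (names and connection strings below are illustrative):

```shell
# One shared PostgreSQL server, one role and database per app:
sudo -u postgres createuser app1 --pwprompt
sudo -u postgres createdb  app1_db --owner app1
sudo -u postgres createuser app2 --pwprompt
sudo -u postgres createdb  app2_db --owner app2

# Each app instance then points at the same server:
#   postgres://app1:...@localhost:5432/app1_db
#   postgres://app2:...@localhost:5432/app2_db
```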

chillfox 2 days ago

I used to run FreeBSD on my home server and switched it over to Alpine Linux (with ZFS) because everything I wanted to run came as Docker containers, and it was just easier to use docker compose to manage all the apps.
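A sketch of that pattern: one compose file per host, each app a service (the image name, port, and volume path here are invented for illustration, not from the comment):

```yaml
# docker-compose.yml — each app gets a service entry like this
services:
  app:
    image: ghcr.io/example/app:1.2
    ports: ["8080:8080"]
    volumes: ["/tank/app:/data"]   # e.g. a ZFS dataset on the host
    restart: unless-stopped
```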