whalesalad 3 days ago

I notice FreeBSD admins tend to follow a 'pets not cattle' approach, carefully nurturing individual systems. Linux admins like myself typically prefer the 'cattle not pets' mindset—using infrastructure-as-code where if a server dies, no problem, just spin up another one. Leverage containers. Statelessness.

I don't want to spend time meticulously configuring things beyond the core infrastructure my services run on. I should probably explore FreeBSD more, but honestly, with containers being everywhere now, I'm not seeing a compelling reason to bother. I realize jails are a valid analogue, but broadly speaking the UX is not the same.

All this being said, I have this romantic draw to FreeBSD and want to play around with it more. But every time I set up a basic box I feel teleported back to 2007.

Are there any fun lab projects, posts, educational series targeted at FreeBSD?

toast0 2 days ago

> I notice FreeBSD admins tend to follow a 'pets not cattle' approach, carefully nurturing individual systems. Linux admins like myself typically prefer the 'cattle not pets' mindset—using infrastructure-as-code where if a server dies, no problem, just spin up another one.

I've worked at 'pets not cattle' and 'cattle not pets', and I vastly prefer pets. Yes, you should be able to easily bring up a new pet when you need to; yes, it must be ok if pet1 goes away, never to be seen again. But no, it's not really ok when your servers have an average lifetime of 30 days. It's very hard to offer a stable service on an unstable substrate. Automatic recovery makes sense in some cases, but if the system stops working, there's a problem that needs to be addressed when possible.

> All this being said, I have this romantic draw to FreeBSD and want to play around with it more. But every time I set up a basic box I feel teleported back to 2007.

Like another poster mentioned, this is actually a good thing. FreeBSD respects your investment in knowledge; everything you learned in 2007 still works, and most likely will continue to work. You won't need to learn a new firewall tool every decade, whichever of the three firewalls you like will keep working. You don't need to learn a new tool to configure interfaces, ifconfig will keep working. You don't need to learn a new tool to get network statistics, netstat will keep working. Etc.
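
To make that concrete, the same handful of lines and commands I learned back then still do the job today (em0 and the addresses below are just placeholder examples):

    # /etc/rc.conf -- same format it has had for decades
    ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.0"
    defaultrouter="192.0.2.1"

    # and the same commands still answer the same questions
    ifconfig em0     # interface configuration and state
    netstat -rn      # routing table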

unethical_ban 2 days ago

I agree on the "knowledge stability" front. I feel like I have to relearn Linux server networking config every three years because I switched distros or a distro switched its network management middleware.

But.

Having tried to move a machine from RHEL 5 to RHEL 7, one where 12 people had used the server over the past 8 years for assorted scripting, log analysis, and automation, for hosting a bespoke Python web request site, and for a team-specific DokuWiki... the idea of having all that in source control and CI/CD is alluring.

toast0 2 days ago

You can certainly keep information on your pets and how to rebuild them in source control along with all the procedures used to update them. It's probably a good idea.

Nobody says you can't do CI/CD with pets too. You do have to keep the pets well groomed, of course.

tick_tock_tick 2 days ago

> But no, it's not really ok when your servers have an average lifetime of 30 days. It's very hard to offer a stable service on an unstable substrate.

The whole cattle mindset exists because, at the end of the day, everything is an "unstable substrate". You're building a stable service on unstable blocks either way; pets don't solve the issue that each pet is fundamentally unstable, you're just pretending it isn't.

toast0 2 days ago

> The whole cattle mindset exists because, at the end of the day, everything is an "unstable substrate". You're building a stable service on unstable blocks either way; pets don't solve the issue that each pet is fundamentally unstable, you're just pretending it isn't.

That's not the way the world has to be. You can have a network that is rock solid. You can have power that is rock solid. You can have hardware that is rock solid.

Sure, if you have a couple thousand machines, a few of them will have hardware problems every year. Yes, once in a while an automatic transfer switch will fail and you'll have a large data center outage. Backhoes exist. Urgent kernel fixes happen. You have to acknowledge failures happen and plan for them, but you should also work to minimize failures, which I honestly haven't seen at the 'cattle not pets' workplaces. Cattle take about two years to get to market [1] (1.5 years before these people receive them, then 180 days before sending them to market); I'd be fine with expecting my servers to run for two years before replacement (and, you know, rotating in new servers throughout, maybe swapping out 1/8th of the servers every quarter so the whole fleet turns over in two years), but after running for 30 days at 'cattle not pets', I started getting complaints that my systems were running for too long.

[1] https://cultivateconnections.org/how-do-you-determine-when-t...

sgarland 2 days ago

I’ve had Linux servers with > 1 year of uptime. I’ve seen much, much higher. It’s entirely possible to have a stable foundation; it’s modern software that’s hot garbage, and relies on ephemerality to stay running.

yjftsjthsd-h 2 days ago

...right, yes, servers. I've certainly never accidentally forgotten to reboot a laptop on cheap commodity hardware for a few months. Slightly more than a few months. Look, it got rebooted eventually, okay?

graemep 3 days ago

> But every time I set up a basic box I feel teleported back to 2007.

You say that as though it's a bad thing! The author values simplicity.

> I notice FreeBSD admins tend to follow a 'pets not cattle' approach, carefully nurturing individual systems. Linux admins like myself typically prefer the 'cattle not pets' mindset—using infrastructure-as-code where if a server dies, no problem, just spin up another one. Leverage containers. Statelessness.

Is it less work to write that code than to manage a "pet"? Are there other advantages?

I think you probably are right about the preferred approach - but what are the advantages of each?

> Statelessness

What about data storage?

yabones 3 days ago

The only thing I currently run on FreeBSD is my storage box. ZFS is absolutely amazing, and FreeBSD supports it fully and without any of the "jank" you'd get running ZFS on Linux. It Just Works (tm), bottom to top. Anything else, I want what I'm familiar with on Linux, like containers and systemd services. I know some people really love pf, but I've been using iptables for so long it would be annoying to switch at this point. So really, it comes down to what you're familiar and comfortable with, and using the right tool for the job.

MisterTea 3 days ago

> ZFS is absolutely amazing, and FreeBSD supports it fully and without any of the "jank" you'd get running ZFS on Linux.

This is why I use FreeBSD as well for my home server, first class ZFS support out of the box. Void Linux musl on my desktop.

I had an old 2TB ZFS array that was part of a TrueNAS setup kicking around for years. I needed to recover some files from it, so I hooked all the disks to a motherboard and booted a FreeBSD live image. I didn't have to do anything; the array was already up and running when I logged in. ezpz.
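
(For anyone attempting the same recovery: if the pool doesn't come up on its own, importing it by hand is still only a command or two; "tank" below is just an example pool name.)

    zpool import          # list pools found on the attached disks
    zpool import -f tank  # force-import one last used on another system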

E39M5S62 2 days ago

ZFS is a first-class citizen on Void Linux, too. There's a lot of care and consideration put into the kernel packages to ensure compatibility with ZFS. ZFSBootMenu is 'native' to Void as well, and the features it provides are quite far ahead of what FreeBSD's bootloader has.

MisterTea 2 days ago

I prefer OS variety and have a mix of Plan 9, Linux, FreeBSD and OpenBSD running my personal stuff.

lunarlull 3 days ago

> without any of the "jank" you'd get running ZFS on Linux.

What jank? Compile it into the kernel or load the module, install the ZFS utils, and it's done. Very simple, no complications; where is the jank?
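
The whole thing is roughly this (device names and pool name are made up):

    sudo modprobe zfs                                # or have it built into the kernel
    sudo zpool create tank mirror /dev/sdb /dev/sdc  # create a mirrored pool
    zpool status tank                                # confirm it's healthy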

whalesalad 3 days ago

Ostensibly DKMS can be interpreted as jank: you upgrade your kernel, the ZFS module fails to rebuild or blocks the upgrade, and now you are in limbo. At least, I can imagine this being a complaint from someone.
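
The limbo might look something like this (module version and kernel strings are made up for illustration):

    $ dkms status
    zfs/2.1.5, 6.1.0-oldkernel, x86_64: installed
    # nothing listed for the freshly installed kernel: the module failed to
    # build, so booting into it leaves your pools unavailable until it's fixed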

lunarlull 2 days ago

I can see that. I compile ZFS directly into the kernel so I've never really dealt with DKMS issues.

bigstrat2003 2 days ago

I certainly would qualify having to compile it for your kernel as jank.

lunarlull 2 days ago

It's not an extra step to compile it for my kernel. I patch it in once, and after that it's just a transparent part of building the kernel as a whole.

aborsy 2 days ago

ZFS works on Ubuntu from top to bottom too. It's installed with a single command.
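
On stock Ubuntu that's roughly:

    # the kernel module ships prebuilt with Ubuntu's kernel packages;
    # this just adds the userland tools
    sudo apt install zfsutils-linux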

sunshine-o 3 days ago

> Are there any fun lab projects, posts, educational series targeted at FreeBSD?

Klara Systems [0], Vermaden [1], and IT Notes [2] seem to be the most active and popular.

- [0] https://klarasystems.com/articles/

- [1] https://vermaden.wordpress.com/posts/

- [2] https://it-notes.dragas.net/categories/freebsd/

vermaden 2 days ago

> - [0] https://klarasystems.com/articles/

I personally wrote several of them :)

Karrot_Kream 2 days ago

Cattle vs pets always seemed like a silly distinction to me. Fundamentally they're about abstraction levels.

If you treat a server as a "pet", then you typically run multiple services through a service runner (systemd, runit, openrc, etc.) and do only a moderate amount of communication between servers. Here the server is your scheduling substrate and the unit of compute is the service. In a "cattle" system, each server is interchangeable and you run some isolated unit, usually a container, on each of your servers. Here the unit of compute is the container and the compute substrate is the cluster of servers.
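
Concretely, the unit of compute under each model might look something like this (the service name and image are made up):

    # "pets": the server is the substrate; a service runner supervises the unit
    # /etc/systemd/system/myapp.service
    [Unit]
    Description=My app
    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    [Install]
    WantedBy=multi-user.target

    # "cattle": the cluster is the substrate; any interchangeable node can run the unit
    # myapp.yaml (Kubernetes Deployment)
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: myapp
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: myapp
      template:
        metadata:
          labels:
            app: myapp
        spec:
          containers:
          - name: myapp
            image: registry.example/myapp:1.0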

In a "pets" system managing many servers is fraught and in a "cattle" system managing many clusters is fraught. So it's simply the abstraction level you want to work at. If you're at the level where your workload fits easily on a single server, or maybe a couple servers, then a pets-like system works fine. If you see the need to horizontally scale then a cattle-like system is what you need. There's a practical problem right now in that containerization ecosystems are usually heavy enough that it takes advanced tools (like bubblewrap or firejail) to isolate your services on a single service which offers the edge to cattle-like systems.

In my experience, many smaller services with non-critical uptime requirements can run just fine on a single server, maybe just moving the data store externally so that failure of the service and failure of the data store are independent.

denkmoon 2 days ago

BSD is for the core infrastructure that you want to meticulously configure. If the system is cattle, who cares what OS is underneath?