> I notice FreeBSD admins tend to follow a 'pets not cattle' approach, carefully nurturing individual systems. Linux admins like myself typically prefer the 'cattle not pets' mindset—using infrastructure-as-code where if a server dies, no problem, just spin up another one.
I've worked at 'pets not cattle' and 'cattle not pets', and I vastly prefer pets. Yes, you should be able to easily bring up a new pet when you need to; yes, it must be ok if pet1 goes away, never to be seen again. But no, it's not really ok when your servers have an average lifetime of 30 days. It's very hard to offer a stable service on an unstable substrate. Automatic recovery makes sense in some cases, but if the system stops working, there's a problem that needs to be addressed when possible.
> All this being said, I have this romantic draw to FreeBSD and want to play around with it more. But every time I set up a basic box I feel teleported back to 2007.
Like another poster mentioned, this is actually a good thing. FreeBSD respects your investment in knowledge: everything you learned in 2007 still works, and most likely will continue to work. You won't need to learn a new firewall tool every decade; whichever of the three firewalls (pf, ipfw, or ipfilter) you like will keep working. You don't need to learn a new tool to configure interfaces; ifconfig will keep working. You don't need to learn a new tool to get network statistics; netstat will keep working. Etc.
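For what it's worth, the stability shows up in the config files themselves. A minimal sketch of the kind of /etc/rc.conf that has looked essentially the same for decades (interface name and addresses are made up for illustration):

```sh
# /etc/rc.conf -- same knobs as in 2007
ifconfig_em0="inet 192.0.2.10 netmask 255.255.255.0"
defaultrouter="192.0.2.1"
pf_enable="YES"
```

These variables are read by the rc system at boot; the same lines keep doing the same thing across releases.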
I agree on the "knowledge stability" front. I feel like I have to relearn Linux server networking config every three years because I switched distros or a distro switched its network management middleware.
But.
Having tried to move a machine from RHEL 5 to RHEL 7, where 12 people had used the server over the past 8 years for ad-hoc scripting, log analysis, and automation, for hosting a bespoke Python web request site, and for a team-specific DokuWiki... the idea of having all that in source control and CI/CD is alluring.
You can certainly keep information on your pets and how to rebuild them in source control along with all the procedures used to update them. It's probably a good idea.
Nobody says you can't do CI/CD with pets too. You do have to keep the pets well groomed, of course.
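For instance, the "rebuild procedure in source control" idea can be as simple as an idempotent shell script that is safe to re-run after every change. A minimal sketch, where the `ensure_line` helper and the config knobs are hypothetical, not a real tool:

```shell
#!/bin/sh
# Idempotent "rebuild recipe" sketch: safe to re-run at any time, so the
# whole file can live in git next to the pet's documentation.

ensure_line() {
    # Append $1 to file $2 only if that exact line isn't already present.
    line="$1"; file="$2"
    grep -qxF "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

CONF="${CONF:-$(mktemp)}"          # stand-in for the pet's real config file
ensure_line 'sshd_enable="YES"' "$CONF"
ensure_line 'sshd_enable="YES"' "$CONF"   # second call is a no-op
ensure_line 'ntpd_enable="YES"' "$CONF"
```

Re-running the script converges the pet to the recorded state instead of replacing it, which is the well-groomed-pet version of CI/CD.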
> But no, it's not really ok when your servers have an average lifetime of 30 days. It's very hard to offer a stable service on an unstable substrate.
The whole cattle mindset exists because, at the end of the day, everything is an "unstable substrate". You're building a stable service on unstable blocks; pets don't solve the issue that each pet is fundamentally unstable, and you're just pretending it's not.
> The whole cattle mindset exists because, at the end of the day, everything is an "unstable substrate". You're building a stable service on unstable blocks; pets don't solve the issue that each pet is fundamentally unstable, and you're just pretending it's not.
That's not the way the world has to be. You can have a network that is rock solid. You can have power that is rock solid. You can have hardware that is rock solid.
Sure, if you have a couple thousand machines, a few of them will have hardware problems every year. Yes, once in a while an automatic transfer switch will fail and you'll have a large data center outage. Backhoes exist. Urgent kernel fixes happen. You have to acknowledge failures happen and plan for them, but you should also work to minimize failures, which I honestly haven't seen at the 'cattle not pets' workplaces. Cattle take about two years to get to market [1] (1.5 years before these people receive them, then 180 days before sending them to market); I'd be fine with expecting my servers to run for two years before replacement (and you know, rotating in new servers throughout, maybe swapping out 1/8th of the servers every quarter, etc), but after running for 30 days at 'cattle not pets', I started getting complaints that my systems were running for too long.
[1] https://cultivateconnections.org/how-do-you-determine-when-t...
I’ve had Linux servers with > 1 year of uptime. I’ve seen much, much higher. It’s entirely possible to have a stable foundation; it’s modern software that’s hot garbage, and relies on ephemerality to stay running.
...right, yes, servers. I've certainly never accidentally forgotten to reboot a laptop on cheap commodity hardware for a few months. Slightly more than a few months. Look, it got rebooted eventually, okay?