blibble 14 hours ago

is this really a big deal given you run ./configure once?

it's like systemd trading off non-determinism for boot speed, when it takes 5 minutes to get through the POST

Aurornis 13 hours ago

> is this really a big deal given you run ./configure once

I end up running it dozens of times when changing versions, checking out different branches, chasing dependencies.

It’s a big deal.

> it's like systemd trading off non-determinism for boot speed, when it takes 5 minutes to get through the POST

A 5-minute POST time is a bad analogy. systemd is used in many places, from desktops (that POST quickly) to embedded systems where boot time is critical.

If deterministic boot is important, then you should specify it explicitly. Relying on emergent behavior for consistent boot order is bad design.

The number of systems that have 5 minute POST times and need deterministic boot is an edge case of an edge case.

Twirrim 10 hours ago

>chasing dependencies.

This aspect of configure, in particular, drives me nuts. Obviously I'd like it to be faster, but it's not the end of the world. I forget what I was trying to build the other week, but I had to make 18 separate runs of configure to find all the things I was missing. When I dug into things it looked like it could probably have done it in 2 runs, each presenting a batch of things that were missing. Instead I got stuck with "configure, install missing package" over and over again.

PinkSheep 4 hours ago

Exactly. Multiply this by the time it takes for one run on a slow machine. Back in the day, I ran a compilation on my phone, as it was the target device. Besides the compilation taking 40 minutes (and configure had missed a thing or two), the configure step itself took a minute or so. Because I don't know all the moving parts, I prefer to start from scratch rather than run into obscure problems later on.

Arguing against parallelization of configure is like arguing against faster OS updates. "It's only once a week/whatever, come on!" Except it's spread over a billion people, time and time again.

blibble 13 hours ago

> to embedded systems where boot time is critical.

if it's critical on an embedded system then you're not running systemd at all

> The number of systems that have 5 minute POST times and need deterministic boot is an edge case of an edge case.

desktop machines are the edge case, there are a LOT more servers running Linux than people using Linux desktops

> Relying on emergent behavior for consistent boot order is bad design.

tell that to the distro authors who, 10 years in, can't tell the difference between network-online.target, network-pre.target, and network.target
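
for reference, the documented pattern for "wait until the network is actually up" needs both of these lines together (After= only orders, it doesn't pull the target in):

    [Unit]
    Wants=network-online.target
    After=network-online.target

yet units shipping with only one of the two, or ordering against plain network.target, are still everywhere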

MindSpunk 13 hours ago

And a very large number of those Linux servers are running Linux VMs, which don't POST, use systemd, and have their boot time dominated by the guest OS. Those servers are probably hosting dozens of VMs too. Boot time makes a lot of difference here.

blibble 13 hours ago

seabios/tianocore still takes longer than /etc/rc on a BSD

amdahl's law's a bitch

0x457 13 hours ago

> from desktops (that POST quickly)

I take it you don't run DDR5?

mschuster91 13 hours ago

> I end up running it dozens of times when changing versions, checking out different branches, chasing dependencies.

Yeah... but none of that is going to change stuff like the size of a data type, the endianness of the architecture you're running on, or the features / build configuration of some library the project depends on.

Parallelization is a band-aid (although a sorely needed one!). IMHO, C/C++ libraries desperately need to develop some sort of standard that doesn't require a full gcc compile for each tiny test. I'd envision something like nodejs's package.json, just with more specific information about the build details themselves. And for stuff like data type sizes, that should be provided by gcc/llvm in a fast-parseable way so that autotools can pick it up.
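
A sketch of how cheap that could be - GCC and Clang already dump type sizes and endianness as predefined macros without compiling anything:

    cc -dM -E -x c /dev/null | grep -E '__SIZEOF_(INT|LONG|POINTER)__|__BYTE_ORDER__'
    # typical output on an x86-64 box (values and order vary by platform):
    #   #define __BYTE_ORDER__ __ORDER_LITTLE_ENDIAN__
    #   #define __SIZEOF_INT__ 4
    #   #define __SIZEOF_LONG__ 8
    #   #define __SIZEOF_POINTER__ 8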

o11c 12 hours ago

There is the `-C` option of course. It's supposedly good for the standard tests that waste all the time, but not so much for the ad-hoc tests various projects use, which have an unfortunate chance of being buggy or varying across time.

... I wonder if it's possible to manually seed a cache file with only known-safe test results and let it still perform the unsafe tests? Be sure to copy the cache file to a temporary name ...
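
Roughly like this, I think (the ac_cv_* names are standard autoconf cache variables; the specific values are only safe if you know they hold on your platform). Anything not already in the cache still gets tested, and its result appended - hence the copy:

    cat > known-safe.cache <<'EOF'
    ac_cv_c_bigendian=no
    ac_cv_header_stdlib_h=yes
    ac_cv_func_malloc_0_nonnull=yes
    EOF
    cp known-safe.cache config.cache        # work on a copy; keep the seed pristine
    ./configure --cache-file=config.cache   # -C is shorthand for this file name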

---

I've thought about rewriting `./configure` in C (I did it in Python once, but Python's portability turned out to be poor - Python 2 was bug-free but got killed; Python 3 was unfixably buggy for a decade or so). I still have a stub shell script that reads HOSTCC etc., then quickly builds and executes `./configure.bin`.
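
The stub is roughly this (file names illustrative):

    #!/bin/sh
    # build the real configure program with the host compiler, then hand off to it
    : "${HOSTCC:=cc}"
    if [ ! -x ./configure.bin ] || [ configure.c -nt configure.bin ]; then
        "$HOSTCC" -O2 -o configure.bin configure.c || exit 1
    fi
    exec ./configure.bin "$@"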

LegionMammal978 13 hours ago

If you do a lot of bisecting, or bootstrapping, or building compatibility matrices, or really anything that needs you to compile lots of old versions, the repeated ./configure steps really start feeling like a drag.

kazinator 12 hours ago

In a "reasonably well-behaved program", if you have the artifacts from a current configure, like a "config.h" header, they are compatible with older commits, even if configurations changed, as long as the configuration changes were additive: introducing some new test, along with a new symbol in "config.h".

It's possible to skip some of the ./configure steps. Especially for someone who knows the program very well.
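
For example, config.status replays the results recorded by the previous run without redoing any of the tests, so (assuming the additive caveat above holds) something like this works when hopping to a nearby older commit:

    git checkout some-older-commit   # illustrative
    ./config.status                  # regenerates Makefile, config.h, ... from the previous run
    make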

LegionMammal978 10 hours ago

Perhaps you can get away with that for small, young, or self-contained projects. But for medium-to-large projects running more than a few years, the (different versions of) external or vendored dependencies tend to come and go, and they all have their own configurations. Long-running projects are also prone to internal reorganizations and overhauls to the build system. (Go back far enough, and you're having to wrangle patchsets for every few months' worth of versions since -fpermissive is no longer permissive enough to get it to build.)

asah 11 hours ago

For postgresql development, you run configure over and over...

csdvrx 13 hours ago

> it's like systemd trading off non-determinism for boot speed, when it takes 5 minutes to get through the POST

That's a bad analogy: if a specific deterministic service ordering is needed for a service to start correctly (say, because it doesn't start reliably with the systemd unit as written), it means the systemd units are not properly encoding the dependency tree in their Before= and After= directives.

When done properly, both solutions should work the same. However, the solution that properly encodes the dependency graph (instead of projecting it onto a 1-dimensional sequence of numbers) is the better one, because it gives you more speed but also more flexibility: you can see the branches any leaf depends on, remove leaves as needed, then cull the branches nothing uses anymore. You could add determinism on top if you wanted, but why bother?
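
To make it concrete (unit names made up), one edge of the graph is simply:

    # my-app.service
    [Unit]
    Requires=my-database.service
    After=my-database.service

systemd derives an ordering from those edges, and anything not connected by one can start in parallel; an rc-style S20/S40 numbering flattens the same information into a single global sequence.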

It's like declaring the dependencies of Linux packages and leaving the job of resolving them to the package manager (apt, pacman...): you can then remove packages that are no longer required.

Compare that to doing a `make install` of everything to /usr/local in a specific order, as dictated by a script: when done properly, both solutions will work, but one is clearly better than the other, because it encodes the actual dependencies instead of projecting them onto a sequence.

You can add determinism if you want to follow a sequence (ex: `apt-get install make` before adding gcc, then adding cuda...), or you can use a meta-package like build-essential, but being restricted to a sequence gains you nothing.

blibble 13 hours ago

I don't think it is a bad analogy

given how complicated the boot process is ([1]), and that it occurs once a month, I'd rather it was as deterministic as possible

vs. shaving 1% off the boot time

[1]: distros continue to ship subtly broken unit files, because the model is too complicated

Aurornis 13 hours ago

Most systems do not have 5 minute POST times. That’s an extreme outlier.

Linux runs all over, including embedded systems where boot time is important.

Optimizing for edge cases on outliers isn’t a priority. If you need specific boot ordering, configure it that way. It doesn’t make sense for the entire Linux world to sacrifice boot speed.

timcobb 12 hours ago

I don't even think my Pentium 166 took 5 minutes to POST. Did computers ever take that long to POST??

yjftsjthsd-h 11 hours ago

Old machines probably didn't, no, but I have absolutely seen machines (Enterprise™ Servers) that took longer than that to get to the bootloader. IIRC it was mostly a combination of hardware RAID controllers and RAM... something. Testing?

lazide 10 hours ago

It takes a while to enumerate a couple of TB worth of RAM DIMMs and 20+ disks.

yjftsjthsd-h 10 hours ago

Yeah, it was somewhat understandable. I also suspect the firmware was... let's say underoptimized, but I agree that the task is truly not trivial.

lazide 9 hours ago

One thing I ran across when trying to figure this out previously - while some firmware is undoubtedly dumb, a decent amount of the time it was simply doing a lot more than typical PC firmware does.

For instance, the slow RAM-check POST I was experiencing was because the firmware was also doing a quick single-pass memory test. Consumer firmware goes ‘meh, whatever’.

Disk spin-up: it was also staggering the disk power-ups so that it didn’t kill the PSU - not a concern if you have 3-4 drives, but definitely a concern if you have 20.

Also, the RAID controller was running basic SMART tests and the like, which consumer stuff typically doesn’t.

Now, how much of this is worthwhile depends on the use case, of course. In ‘farm of cheap PCs’ type cloud hosting environments, most of these conditions get handled by software, and it doesn’t matter much if any single box is half broken.

If you have one big box serving a bunch of key infra, and reboot it periodically as part of ‘scheduled maintenance’ (aka old school on prem), then it does.

BobbyTables2 12 hours ago

Look at enterprise servers.

Completing POST in under 2 minutes is not guaranteed.

Especially the 4-socket beasts with lots of DIMMs.

Twirrim 10 hours ago

Physical servers do. It's always astounding to me how long it takes to initialise all that hardware.

kcexn 12 hours ago

Oh? What's an example of a common way for unit files to be subtly broken?

juped 4 hours ago

See: the comment above and its folkloric concept of systemd as some kind of constraint solver

Unfortunately no one has actually bothered to write down how systemd really works; the closest to a real writeup out there is https://blog.darknedgy.net/technology/2020/05/02/0/