I don't understand why they don't just statically link their binaries. First, they said this:
> Even if you managed to statically link GLIBC—or used an alternative like musl—your application would be unable to load any dynamic libraries at runtime.
But then they immediately said they actually statically link all of their deps aside from libc.
> Instead, we take a different approach: statically linking everything we can.
If they're statically linking everything other than libc, then using musl or statically linking glibc will finish the job. Unless they have some need for loading shared libs at runtime which they didn't already have linked into their binary (i.e. manual dlopen), this solves the portability problem on Linux.
What am I missing (I'm aware of the security implications of statically linked binaries -- which they didn't mention as a concern)?
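For what it's worth, the "manual dlopen" case is the only one I can see breaking under full static linking. A minimal sketch of that pattern is below (the plugin name and symbol are made up); as I understand it, with a statically linked musl this dlopen simply fails and returns NULL, and with a statically linked glibc it's unreliable at best.

    /* Hypothetical plugin loading; this is the pattern that full static
       linking can break. Build: cc main.c -ldl */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *h = dlopen("libplugin.so", RTLD_NOW);   /* made-up plugin */
        if (!h) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }
        void (*entry)(void) = (void (*)(void))dlsym(h, "plugin_entry");
        if (entry)
            entry();
        dlclose(h);
        return 0;
    }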
And please, statically linking everything is NOT a solution -- the only reason I can still run some games from 20 years ago on my recent Linux is because they didn't decide to stupidly statically link everything, so I at least _can_ replace the libraries with hooks that make the games work with newer versions.
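To make "replace the libraries with hooks" concrete, this is roughly what such a hook looks like: a small LD_PRELOAD shim (the game and paths are made up for illustration), which only works because the old binary resolves libc symbols dynamically.

    /* hook.c: redirect an obsolete hard-coded data path in an old game.
       Build: gcc -shared -fPIC hook.c -o hook.so -ldl
       Run:   LD_PRELOAD=./hook.so ./oldgame                           */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>
    #include <string.h>

    FILE *fopen(const char *path, const char *mode) {
        FILE *(*real_fopen)(const char *, const char *) =
            (FILE *(*)(const char *, const char *))dlsym(RTLD_NEXT, "fopen");
        if (strcmp(path, "/usr/lib/oldgame/data.pak") == 0)
            path = "/home/user/.local/share/oldgame/data.pak";
        return real_fopen(path, mode);
    }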
As long as the library is available.
Neither static nor dynamic linking is looking to solve the 20 year old binaries issue, so both will have different issues.
But I think it's easier for me to find a 20 year old ISO of Red Hat/Slackware where I can simply run the statically linked binary. Dependency hell for older distros becomes really difficult when the older packages are not archived anywhere anymore.
I've recently had to do this (to bisect when a change introduced a superficial bug into a 20-year-old program). I think "simply run" is viewing Linux of that era through rose-tinted glasses.
Even for simple 2D "Super VGA" you need to choose the correct XFree86 implementation and still tweak your X configuration. The emulated hardware also has bugs, since most of the focus is now on virtio drivers.
(The 20-year-old program was linked against libsdl, which amusingly means on my modern system it supports Wayland with no issues.)
Plus there are other advantages, which I always say should never be discounted, such as desktop integration, that you get when you can replace the toolkit (dynamic) libraries rather than just "running everything in a VM". Running everything in a VM is a poor excuse for backwards compatibility.
It's interesting to think how a 20 year old OS plus one program is probably a smaller bundle size than many modern Electron apps ostensibly built "for cross platform compatibility". Maybe microkernels are the way.
Old DOS games are often sold (e.g. on GOG or Steam) with an installer and a full copy of DOSBox, so you get an entire operating system plus hardware emulator to run the game. Looks like this adds less than 10 MB to each game installed in Windows.
How should a microkernel run (WASI) WASM runtimes?
Docker can run WASM runtimes, but I don't think podman or nerdctl can yet.
From https://news.ycombinator.com/item?id=38779803 :
docker run \
--runtime=io.containerd.wasmedge.v1 \
--platform=wasi/wasm \
secondstate/rust-example-hello
From https://news.ycombinator.com/item?id=41306658 :
> ostree native containers are bootable host images that can also be built and signed with a SLSA provenance attestation; https://coreos.github.io/rpm-ostree/container/ :
rpm-ostree rebase ostree-image-signed:registry:<oci image>
rpm-ostree rebase ostree-image-signed:docker://<oci image>
Native containers run on the host and can host normal containers if a container engine is installed. Compared to an Electron runtime, IDK how minimal a native container with systemd and podman, WASM runtimes, and portable GUI rendering libraries could be. CoreOS - which was for creating minimal host images that host containers - is now Fedora Atomic, i.e. the Fedora Atomic Desktops built with rpm-ostree: Silverblue, Kinoite, Sericea; and Bazzite and Secureblue.
Secureblue has a hardened_malloc implementation.
From https://jangafx.com/insights/linux-binary-compatibility :
> To handle this correctly, each libc version would need a way to enumerate files across all other libc instances, including dynamically loaded ones, ensuring that every file is visited exactly once without forming cycles. This enumeration must also be thread-safe. Additionally, while enumeration is in progress, another libc could be dynamically loaded (e.g., via dlopen) on a separate thread, or a new file could be opened (e.g., a global constructor in a dynamically loaded library calling fopen).
FWIU, ROP (Return-Oriented Programming) and gadget-finding approaches have implementations of things like dynamic header discovery of static and dynamic libraries at runtime, in order to compile more at runtime. That isn't safe, though: nothing reverifies what's mutated after loading the PE into process space, whether or not NX tagging is applied, before or after secure enclaves and LD_PRELOAD (which some Go binaries don't respect, for example).
Can a microkernel do eBPF?
What about a RISC machine for WASM and WASI?
"Customasm – An assembler for custom, user-defined instruction sets" (2024) https://news.ycombinator.com/item?id=42717357
Maybe that would shrink some of these flatpaks which ship their own Electron runtimes instead of like the Gnome and KDE shared runtimes.
Python's manylinux project specifies a number of libc versions that manylinux packages portably target.
Manylinux requires a tool called auditwheel for Linux, delocate for macOS, and delvewheel for Windows;
Auditwheel > Overview: https://github.com/pypa/auditwheel#overview :
> auditwheel is a command line tool to facilitate the creation of Python wheel packages for Linux (containing pre-compiled binary extensions) that are compatible with a wide variety of Linux distributions, consistent with the PEP 600 manylinux_x_y, PEP 513 manylinux1, PEP 571 manylinux2010 and PEP 599 manylinux2014 platform tags.
> auditwheel show: shows external shared libraries that the wheel depends on (beyond the libraries included in the manylinux policies), and checks the extension modules for the use of versioned symbols that exceed the manylinux ABI.
> auditwheel repair: copies these external shared libraries into the wheel itself, and automatically modifies the appropriate RPATH entries such that these libraries will be picked up at runtime. This accomplishes a similar result as if the libraries had been statically linked without requiring changes to the build system. Packagers are advised that bundling, like static linking, may implicate copyright concerns.
github/choosealicense.com: https://github.com/github/choosealicense.com
From https://news.ycombinator.com/item?id=42347468 :
> A manylinux_x_y wheel requires glibc>=x.y. A musllinux_x_y wheel requires musl libc>=x.y; per PEP 600
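As a sketch of what that requirement means at runtime: a dynamically linked program can ask glibc for its version and compare it against the wheel tag. gnu_get_libc_version() is glibc-specific (musl has no equivalent), and the 2.28 threshold below is just an example.

    /* Check whether a manylinux_2_28-style requirement (glibc >= 2.28)
       is satisfied on the running system. glibc-only. */
    #include <gnu/libc-version.h>
    #include <stdio.h>

    int main(void) {
        const char *v = gnu_get_libc_version();   /* e.g. "2.39" */
        int major = 0, minor = 0;
        sscanf(v, "%d.%d", &major, &minor);
        int ok = (major > 2) || (major == 2 && minor >= 28);
        printf("glibc %s: manylinux_2_28 binaries %s load here\n",
               v, ok ? "should" : "will not");
        return ok ? 0 : 1;
    }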
Return oriented programming: https://en.wikipedia.org/wiki/Return-oriented_programming
/? awesome return oriented programming site:github.com https://www.google.com/search?q=awesome+return+oriented+prog...
This can probably find multiple versions of libc at runtime, too: https://github.com/0vercl0k/rp :
> rp++ is a fast C++ ROP gadget finder for PE/ELF/Mach-O x86/x64/ARM/ARM64 binaries.
> How should a microkernel run (WASI) WASM runtimes?
Same as any other kernel—the runtime is just a userspace program.
> Can a microkernel do eBPF?
If it implements it, why not?
Should a microkernel implement eBPF and WASM in the kernel, or, for the same reasons that justify a microkernel in the first place, should eBPF and most other things be confined or relegated to userspace, in terms of microkernel goals like separation of concerns and least privilege, and then performance?
Linux containers have process isolation features that userspace sandboxes like bubblewrap and runtimes don't.
Flatpaks bypass SELinux and AppArmor policies and run unconfined (constrained only by DAC, not MAC) because the path to the executable in a Flatpak differs from the system policy for */s?bin/*, and so it wouldn't be relabeled with the necessary extended filesystem attributes even on `restorecon /` (which runs on reboot if /.autorelabel exists).
Thus, e.g. Firefox from a signed package in a container on the host, and Firefox from a package on the host are more process-isolated than Firefox in a Flatpak or from a curl'ed statically-linked binary because one couldn't figure out the build system.
Container-selinux, Kata containers, and GVisor further secure containers without requiring the RAM necessary for full VM virtualization with Xen or Qemu; and that is possible because of container interface standards.
Linux machines run ELF binaries, which could include WASM instructions.
/? ELF binary WASM : https://www.google.com/search?q=elf+binary+wasm :
mewz-project/wasker https://github.com/mewz-project/wasker :
> What's new with Wasker is, Wasker generates an OS-independent ELF file where WASI calls from Wasm applications remain unresolved.
> This unresolved feature allows Wasker's output ELF file to be linked with WASI implementations provided by various operating systems, enabling each OS to execute Wasm applications.
> Wasker empowers your favorite OS to serve as a Wasm runtime!
Why shouldn't we container2wasm everything? Because (rootless) Linux containers better isolate the workload than any current WASM runtime in userspace.
Debian archives all of our binaries (and source) here: https://snapshot.debian.org/
Some things built on top of that:
https://manpages.debian.org/man/debsnap
https://manpages.debian.org/man/debbisect
https://wiki.debian.org/BisectDebian
https://metasnap.debian.net/
https://reproduce.debian.net/
All since ~mid 2005. Unfortunately a tarball I've been looking for was added before then, but replaced before a release was cut (so it's also missing from those archives).
Which one?
The OpenAL source tarball that was imported into Debian as version "0.0.200101102349-1".
That's an incredibly old version; I think you are going to be out of luck, and I wonder what you need from it.
That said, a couple of things to try:
Email Dan Helfman <[email protected]> and ask if they have a copy lying around in backups anywhere. They aren't a Debian member any more but incredibly are still posting to the Debian BTS occasionally, as upstream developer of borgmatic.
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1056364#10 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1096005#10
Contact the Icculus folks, the CVS server mentioned in debian/copyright of the oldest version of openal on Debian snapshot points to the Icculus server. Won't get you the debian/ directory, but could possibly get you some sort of CVS access.
Yeah, I reached out to Dan a few months back. Unfortunately, they didn't have a copy anymore (and they were awesome enough to even check backups, props!). At this point I'd be happy with just the headers; the ones from "0.2001061600-2.3" just a few months later seem different enough.
Not a blocker for what I'm working on, but more of an itch to track down the lost version.
Dynamic linking obviously has benefits, or there would not be any incentive to build dynamic libraries or provide the capacity for them.
The problem is they also have problems, which motivates people to statically link.
I remember back in the Amiga days when there were multiple libraries that provided file requesters. At one point I saw a unifying file requester library that implemented the interfaces of multiple others so that all requesters had the same look.
It's something that hasn't been done on Linux as far as I am aware, partially because of the problems with Linux dynamic libraries.
I think the answer isn't just static linking.
I think the solution is a commitment.
If you are going to make a dynamic library, commit to backwards compatibility. If you can't provide that, that's ok, but please statically link.
Perhaps making a library with a base-level, forever-backwards-compatible interface, plus a static version for breaking changes, would help. That might allow for a blend of bugfix support and adding future features.
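For the dynamic-library side of that commitment, the existing mechanism is symbol versioning, which glibc itself uses to keep old binaries working across incompatible changes. A rough sketch (the library name, version nodes, and function are all made up):

    /* libfoo.c: old callers keep the FOO_1.0 behavior forever; newly
       linked callers bind to the default FOO_2.0 version of the name.

       foo.map version script:
         FOO_1.0 { global: foo_frob; local: *; };
         FOO_2.0 { global: foo_frob; } FOO_1.0;

       Build: gcc -shared -fPIC libfoo.c -Wl,--version-script=foo.map \
              -o libfoo.so.1                                           */

    int foo_frob_v1(int x) { return x + 1; }                 /* old ABI */
    __asm__(".symver foo_frob_v1, foo_frob@FOO_1.0");

    int foo_frob_v2(int x, int flags) {                      /* new ABI */
        return flags ? x * 2 : x + 2;
    }
    __asm__(".symver foo_frob_v2, foo_frob@@FOO_2.0");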
At least for some apps, perhaps it’s Wine and the Win32 API which is the answer.
Static linking makes it impossible for a library to evolve external to applications. That’s not a great outcome for a lot of reasons.
musl and glibc static links are their own Pandora’s box of pain and suffering. They don’t “just work” like you’d hope and dream.
This blows my mind, that in 2025 we still struggle with a simple task such as "read in a string, parse it, and query a cascade of resolvers to discover its IP". I just can't fathom how that is a difficult problem, or why DNS still is notorious for causing so much pain and suffering. Compared to the advancements in hardware and graphics and so many other areas.
There are resolvers for not just DNS but for users and other lookups. The list of resolvers is dynamic, they are configured in /etc/nsswitch.conf. The /etc/hosts lookup is part of the system.
Where do the resolvers come from? It needs to be possible to install resolvers separately and dynamically load them. Unless you want to have NIS always installed. Better to install LDAP for those who need it.
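As an illustration of why that's in-process (a minimal sketch): even a plain user lookup fans out through NSS. With something like "passwd: files ldap" in /etc/nsswitch.conf, glibc dlopen()s libnss_files.so.2 and libnss_ldap.so.2 behind this single call, which is exactly the part a statically linked glibc can't do reliably.

    /* One libc call, N dynamically loaded NSS backends behind it. */
    #include <pwd.h>
    #include <stdio.h>

    int main(void) {
        struct passwd *pw = getpwnam("root");   /* files, ldap, sss, ... */
        if (pw)
            printf("uid=%d home=%s\n", (int)pw->pw_uid, pw->pw_dir);
        return pw ? 0 : 1;
    }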
It could be handled by a system daemon instead. We don't really need this in-process all the time.
Various things including name (DNS) resolution rely on dynamic linking.
Are you saying that a statically linked binary cannot make an HTTP request to `google.com` because it would be unable to resolve the domain name?
There are entire distros, like alpine, built on musl. I find this very hard to believe.
All versions of MUSL prior to 1.2.4 (released less than two years ago) would indeed fail to perform DNS lookups in many common cases, and a lot of programs could not run in MUSL as a result. (I'm not aware of what specific deficiencies remain in MUSL, but given the history even when there are explicit standards, I am confident that there are more.) This wasn't related to dynamic linking though.
Glibc's NSS is mostly relevant for LANs. Which is a lot of corporate and home networks.
You have to bundle your own resolver into your application. But here's the rub: users expect your application to respect nsswitch, which requires loading shared libs which execute arbitrary code. How Go handles this is somewhat awkward. They parse /etc/nsswitch.conf and decide if they can cheat and use their own resolver based on what modules they see[1]. Otherwise they farm out to cgo to go through glibc.
[1] They're playing with fire here because you can't really assume to know for sure how the module 'dns' behaves. A user could replace the lib that backs it with their own that resolves everything to zombo.com. It would be one thing if nsswitch described behavior which was well defined and could be emulated, but it doesn't; it specifies a specific implementation.
Fascinating. Thanks for breaking this down more. I think the article could've explained this point further.
The configuration of DNS resolution on Linux is quite complicated [1]. Musl just ignores all that. You can build a distro that works with musl, but a static musl binary dropped into an arbitrary Linux system won't necessarily work correctly.
The easy and conforming way to do that would be to call "getent hosts google.com" and use the answer. But this only works for simple use cases where you just need some IPv4/IPv6 address; you can't get other kinds of DNS records like MX or TLSA this way.
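The in-process equivalent (a minimal sketch) is getaddrinfo(), which, when dynamically linked against glibc, consults nsswitch.conf the same way getent does, and has the same limitation: addresses only, no MX or TLSA.

    /* Roughly `getent hosts google.com`, via the system's resolvers. */
    #include <arpa/inet.h>
    #include <netdb.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <sys/socket.h>

    int main(void) {
        struct addrinfo hints = {0}, *res, *p;
        hints.ai_family = AF_UNSPEC;        /* IPv4 and IPv6 */
        hints.ai_socktype = SOCK_STREAM;

        int err = getaddrinfo("google.com", NULL, &hints, &res);
        if (err) {
            fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
            return 1;
        }
        for (p = res; p; p = p->ai_next) {
            char buf[INET6_ADDRSTRLEN];
            const void *addr = (p->ai_family == AF_INET)
                ? (const void *)&((struct sockaddr_in *)p->ai_addr)->sin_addr
                : (const void *)&((struct sockaddr_in6 *)p->ai_addr)->sin6_addr;
            printf("%s\n", inet_ntop(p->ai_family, addr, buf, sizeof buf));
        }
        freeaddrinfo(res);
        return 0;
    }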