My own perf comparison: when I switched from Fil-C running on my system’s libc (recent glibc) for yololand to my own build of musl, I got a 1-2% perf regression. My best guess is that it’s because glibc’s memcpy/memmove/memset are better. Couldn’t have been the allocator since Fil-C’s runtime has its own allocator.
abnercoimbre · 44m ago
Interesting! Will you stick around with the musl build? And if so, why?
ObscureScience · 5h ago
That table is unfortunately quite old. I can't personally say what has changed, but it's hard to put much confidence in how relevant the information still is.
thrtythreeforty · 3h ago
It really ought to lead with the license of each library. I was considering dietlibc until I got to the bottom - GPLv2. I am a GPL apologist and even I can appreciate that this is a nonstarter; even GNU's libc is only LGPL!
LeFantome · 2h ago
musl seems to have displaced dietlibc. It's much more complete, yet still fairly small and light.
yusina · 17m ago
Note that dietlibc is the project of a sole coder in the CCC sphere from Berlin (Fefe). His main objective was to learn how low-level infrastructure is implemented; he then started using it in some of his other projects after realizing how much bloat he could skip by implementing just the bare essentials. musl has a different set of objectives.
moomin · 2h ago
No cosmopolitan, pity.
jay-barronville · 3h ago
Please note that the linked comparison table has been unmaintained for a while. This is even explicitly stated on the legacy musl libc website [0] (i.e., “The (mostly unmaintained) libc comparison is still available on etalabs.net.”).
> "I have tried to be fair and objective, but as I am the author of musl"
Yeah, pretty obvious when they state as much in the first paragraph.
snickerer · 5h ago
Fun libc comparison by the author of musl.
My takeaway is: glibc is bloated but fast. Quite an unexpected combination. Am I right?
kstrauser · 4h ago
It’s not shocking. More complex implementations using more sophisticated algorithms can be faster. That’s not always true, but it often is. For example, look at some of the string search algorithms used by things like ripgrep. They’re way more complex than just looping across the input and matching character by character, and they pay off.
Something like glibc has had decades to swap in complex, fast code for simple-looking functions.
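A toy sketch of the idea (my own illustration, not glibc's actual code; the real implementations go much further, with algorithms like two-way string matching and hand-written SIMD):

```c
#include <stdio.h>
#include <stddef.h>
#include <string.h>

/* Naive: try every position, comparing character by character. */
static const char *search_naive(const char *hay, size_t n,
                                const char *needle, size_t m) {
    for (size_t i = 0; i + m <= n; i++) {
        size_t j = 0;
        while (j < m && hay[i + j] == needle[j])
            j++;
        if (j == m)
            return hay + i;
    }
    return NULL;
}

/* Smarter: let memchr (often vectorized inside libc) jump to candidate
 * positions for the first byte, then verify the rest with memcmp. */
static const char *search_memchr(const char *hay, size_t n,
                                 const char *needle, size_t m) {
    if (m == 0)
        return hay;
    const char *p = hay;
    const char *end = hay + n;
    while ((size_t)(end - p) >= m &&
           (p = memchr(p, needle[0], (size_t)(end - p) - m + 1)) != NULL) {
        if (memcmp(p, needle, m) == 0)
            return p;
        p++;
    }
    return NULL;
}

int main(void) {
    const char *text = "the quick brown fox";
    printf("%s\n", search_naive(text, strlen(text), "brown", 5));
    printf("%s\n", search_memchr(text, strlen(text), "brown", 5));
    return 0;
}
```

Both return the same matches; the second just hands the hot inner scan to memchr, which libc is free to implement with vector instructions.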
weinzierl · 4h ago
In the case of glibc, I think what you said is orthogonal to its bloat. Yes, it has complex implementations, but since they are there for a good reason, I'd hardly call them bloat.
Independently of that, glibc implements a lot of stuff that could be considered bloat:
- Extensive internationalization support
- Extensive backward compatibility
- Support for numerous architectures and platforms
- Comprehensive implementations of optional standards
kstrauser · 4h ago
Ok, fair points, although internationalization seems like a reasonable thing to include at first glance.
Is there a fork of glibc that strips ancient or bizarre platforms?
SAI_Peregrinus · 1h ago
It's called glibc. Essentially all that "bloat" is conditionally compiled; if your target isn't an ancient or bizarre platform, it won't get included in the runtime.
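For a flavor of what that means in practice, here's a hypothetical sketch (glibc's real selection is more elaborate, with per-architecture source directories and runtime dispatch within an architecture, but the build-time effect is the same): only the branch matching the build target ever reaches the compiled library.

```c
#include <stdio.h>

/* Only the branch matching the build target is compiled; the other
 * variants contribute nothing to the resulting binary. */
static const char *memcpy_variant(void) {
#if defined(__x86_64__)
    return "x86-64: SIMD-tuned routines";
#elif defined(__aarch64__)
    return "aarch64: NEON-tuned routines";
#else
    return "generic: portable C fallback";
#endif
}

int main(void) {
    printf("selected at build time: %s\n", memcpy_variant());
    return 0;
}
```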
kstrauser · 1h ago
That’s mostly true, but not quite. For instance, suppose you aim to support all of 32/64-bit and little/big-endian. You’ll likely end up factoring straightforward math operations out into standalone functions. Granted, those will probably get inlined, but it may mean your structure is more abstracted than it would be otherwise. Just supporting the options has implications.
That’s not the strongest example. I just meant it to be illustrative of the idea.
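To make that concrete with a hypothetical example: a byte-order-independent load helper is exactly the kind of small function this sort of portability pushes you to factor out, where a little-endian-only library could just cast the pointer.

```c
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Reads a 32-bit little-endian value from a byte buffer. Works the same
 * on 32/64-bit and little/big-endian targets; a direct pointer cast is
 * shorter but only correct on little-endian machines (and may fault on
 * strict-alignment ones). */
static inline uint32_t load_le32(const unsigned char *p) {
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}

int main(void) {
    const unsigned char buf[4] = {0x78, 0x56, 0x34, 0x12};
    printf("0x%08" PRIx32 "\n", load_le32(buf));  /* 0x12345678 on every target */
    return 0;
}
```

The compiler will usually inline this down to a single load on little-endian targets, so the cost is mostly structural rather than runtime, which is the point above.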
dima55 · 2h ago
What problem are you trying to solve? glibc works just fine for most use cases. If you have some niche requirements, you have alternative libraries you can use (listed in the article). Forking glibc in the way you describe is literally pointless.
kstrauser · 1h ago
Nothing, really. I was just curious; this isn't something I know much about, but I'd like to learn more about it.
LeFantome · 2h ago
A lot of the “slowness” of musl is down to its default allocator, which can be swapped out.
For example, Chimera Linux uses musl with mimalloc and it is quite snappy.
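As a rough sketch of what the swap looks like from the application side (based on mimalloc's documented override mechanism, not Chimera's exact integration): the program keeps calling plain malloc/free, and which allocator answers is decided at link or load time.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Nothing allocator-specific here: whichever malloc implementation is
 * linked in (musl's own, or mimalloc via -lmimalloc or an LD_PRELOAD of
 * libmimalloc.so) services these calls. */
int main(void) {
    char *buf = malloc(64);
    if (!buf)
        return 1;
    strcpy(buf, "hello from whichever allocator is loaded");
    puts(buf);
    free(buf);
    return 0;
}
```

Running the same binary as, say, LD_PRELOAD=libmimalloc.so ./a.out (the exact path depends on where mimalloc is installed) swaps the allocator without recompiling anything.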
cyberax · 18m ago
Not quite correct. glibc is slow if you need to be able to fork quickly.
However, it does have super-optimized string/memory functions: highly tuned assembly implementations, using SIMD, for dozens of different CPUs.
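A quick-and-dirty way to see that for yourself (my own throwaway harness, not a rigorous benchmark; results swing a lot with buffer size, alignment, and CPU) is to build the same copy loop against glibc and against musl (e.g. via the musl-gcc wrapper) and compare throughput:

```c
#define _POSIX_C_SOURCE 199309L   /* for clock_gettime */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

int main(void) {
    const size_t len = 64 * 1024;   /* 64 KiB per copy */
    const int iters = 200000;       /* ~12.8 GB moved in total */
    char *src = malloc(len), *dst = malloc(len);
    if (!src || !dst)
        return 1;
    memset(src, 'x', len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++)
        memcpy(dst, src, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    /* Touch dst so the copies can't be optimized away entirely. */
    volatile char sink = dst[len - 1];
    (void)sink;

    double secs = (double)(t1.tv_sec - t0.tv_sec)
                + (double)(t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("memcpy throughput: %.2f GiB/s\n",
           (double)len * iters / secs / (1024.0 * 1024.0 * 1024.0));

    free(src);
    free(dst);
    return 0;
}
```

On typical x86-64 hardware the glibc build will often come out ahead on large copies, which lines up with the guess upthread about glibc's memcpy/memmove/memset being better.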
timeinput · 4h ago
My takeaway is that it's not a meaningful chart? Just in the first row, musl looks bloated at 426k compared to dietlibc at 120k. Why were those colors chosen? It's arbitrary and up to the author of the chart.
The author of musl made a chart that focuses on the things they cared about, benchmarked them, and found that for the things they prioritized, musl beats the other standard library implementations (at least going by the count of green rows)? Neat.
I mean, I'm glad they made the library, that it's useful, and that it's meeting the goals they set out to achieve, but what would the same chart created by the other library authors look like?
[0]: https://www.musl-libc.org