Not sure if that’s relevant, but when I do micro-benchmarks like that, measuring time intervals way smaller than 1 second, I use the __rdtsc() compiler intrinsic instead of standard library functions.
On all modern processors, that instruction measures wallclock time with a counter that increments at the base frequency of the CPU, unaffected by dynamic frequency scaling.
Apart from the great resolution, that time measuring method has the upside of being very cheap: a couple of orders of magnitude faster than an OS kernel call.
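A minimal sketch of that pattern, assuming GCC or Clang on x86 (serializing fences like rdtscp/lfence and the tick-rate calibration are left out):

    #include <stdint.h>
    #include <stdio.h>
    #include <x86intrin.h>   /* __rdtsc() on GCC/Clang; <intrin.h> on MSVC */

    int main(void) {
        uint64_t t0 = __rdtsc();
        /* ... code under test ... */
        uint64_t t1 = __rdtsc();
        /* The counter ticks at a fixed rate regardless of frequency scaling;
           divide by that rate (e.g. ~3.0e9 on a nominal 3 GHz part) for seconds. */
        printf("elapsed ticks: %llu\n", (unsigned long long)(t1 - t0));
        return 0;
    }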
sa46 · 8h ago
Isn't gettimeofday implemented with vDSO to avoid kernel context switching (and therefore, most of the overhead)?
My understanding is that using tsc directly is tricky. The rate might not be constant, and the rate differs across cores. [1]
I think most current systems have an invariant TSC. I skimmed your article and was surprised to see an offset (though not totally shocked), but the rate looked the same.
You could cpu pin the thread that's reading the tsc, except you can't pin threads in OpenBSD :p
[1]: https://www.pingcap.com/blog/how-we-trace-a-kv-database-with...
wahern · 6h ago
But just to be clear (for others), you don't need to do that because using RDTSC/RDTSCP is exactly how gettimeofday and clock_gettime work these days, even on OpenBSD. Where using the TSC is practical and reliable, the optimization is already there.
OpenBSD actually only implemented this optimization relatively recently. Though most TSCs will be invariant, they still need to be trained across cores, and there are other minutiae (sleeping states?) that made it a PITA to implement in a reliable way, and OpenBSD doesn't have as much manpower as Linux. Some of those non-obvious issues would be relevant to someone trying to do this manually, unless they could rely on their specific hardware behavior.
Dylan16807 · 7h ago
If you have something newer than a Pentium 4, the rate will be constant.
I'm not sure of the details for when cores end up with different numbers.
quotemstr · 7h ago
Wizardly workarounds for broken APIs persist long after those APIs are fixed. People still avoid things like flock(2) because at one time NFS didn't handle file locking well. CLOCK_MONOTONIC_RAW is fine these days with the vDSO.
mananaysiempre · 4h ago
This does not account for frequency scaling on laptops, context switches, core migrations, time spent in syscalls (if you don’t want to count it), etc. On Linux, you can get the kernel to expose the real (non-“reference”) cycle counter for you to access with __rdpmc() (no syscall needed) and put the corrective offset in a memory-mapped page. See the example code under cap_user_rdpmc on the manpage for perf_event_open() [1] and NOTE WELL the -1 in rdpmc(idx-1) there (I definitely did not waste an hour on that).
If you want that on Windows, well, it’s possible, but you’re going to have to do it asynchronously from a different thread and also compute the offsets your own damn self[2].
Alternatively, on AMD processors only, starting with Zen 2, you can get the real cycle count with __aperf() or __rdpru(__RDPRU_APERF) or manual inline assembly depending on your compiler. (The official AMD docs will admonish you not to assign meaning to anything but the fraction APERF / MPERF in one place, but the conjunction of what they tell you in other places implies that MPERF must be the reference cycle count and APERF must be the real cycle count.) This is definitely less of a hassle, but in my experience the cap_user_rdpmc method on Linux is much less noisy.
[1] https://man7.org/linux/man-pages/man2/perf_event_open.2.html
[2] https://www.computerenhance.com/p/halloween-spooktacular-day...
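A stripped-down sketch of that cap_user_rdpmc setup, closely following the manpage example (error paths and the seqlock retry loop around the read are omitted; assumes x86 Linux with GCC/Clang):

    #include <linux/perf_event.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <x86intrin.h>

    int main(void) {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof attr);
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof attr;
        attr.config = PERF_COUNT_HW_CPU_CYCLES;  /* real cycles, not "reference" */
        attr.exclude_kernel = 1;                 /* don't count time in syscalls */

        int fd = (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
        if (fd < 0) { perror("perf_event_open"); return 1; }

        /* The first mmap'd page exposes the counter index and corrective offset. */
        struct perf_event_mmap_page *pc = mmap(NULL, (size_t)sysconf(_SC_PAGESIZE),
                                               PROT_READ, MAP_SHARED, fd, 0);
        if (pc == MAP_FAILED) { perror("mmap"); return 1; }

        if (pc->cap_user_rdpmc && pc->index) {
            /* NOTE WELL the -1: pc->index is the hardware counter index plus one. */
            uint64_t cycles = __rdpmc(pc->index - 1) + pc->offset;
            printf("cycles so far: %llu\n", (unsigned long long)cycles);
        }
        return 0;
    }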
> does not account for frequency scaling on laptops
Are you sure about that?
> time spent in syscalls (if you don’t want to count it)
The time spent in syscalls was the main objective the OP was measuring.
> cycle counter
While technically interesting, most of the time when I do my micro-benchmarks I only care about wallclock time. Contrary to what you see in search engines and ChatGPT, the RDTSC instruction is not a cycle counter, it’s a high-resolution wallclock timer. That instruction was counting CPU cycles like 20 years ago; it doesn’t do that anymore.
mananaysiempre · 58m ago
>> does not account for frequency scaling on laptops
> Are you sure about that?
> [...] RDTSC instruction is not a cycle counter, it’s a high resolution wallclock timer [...]
So we are in agreement here: with RDTSC you’re not counting cycles, you’re counting seconds. (That’s what I meant by “does not account for frequency scaling”.) I guess there are legitimate reasons to do that, but I’ve found organizing an experimental setup for wall-clock measurements to be excruciatingly difficult: getting 10–20% differences depending on whether your window is open or AC is on, or on how long the rebuild of the benchmark executable took, is not a good time. In a microbenchmark, I’d argue that makes RDTSC the wrong tool even if it’s technically usable with enough work. In other situations, it might be the only tool you have, and then sure, go ahead and use it.
> The time spent in syscalls was the main objective the OP was measuring.
I mean, of course I’m not covering TFA’s use case when I’m only speaking about Linux and Windows, but if you do want to include time in syscalls on Linux that’s also only a flag away. (With a caveat for shared resources—you’re still not counting time in kswapd or interrupt handlers, of course.)
MortyWaves · 46m ago
Fascinating how each “standard” or intrinsic that gets added actually totally fails to give you the real numbers promised.
mrlongroots · 3h ago
They could've just used `clock_gettime(CLOCK_MONOTONIC)`
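Which, for reference, is just (a minimal sketch; this path goes through the vDSO on Linux, so there's no syscall overhead):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        /* ... code under test ... */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("elapsed: %fs\n",
               (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
    }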
junon · 3h ago
rdtsc isn't available on all platforms, for what it's worth. It's often disabled, as there's a CPU flag gating its use in user space, and it's well known to not be so accurate.
loeg · 2h ago
What platforms disable rdtsc for userspace? What accuracy issues do you think it has?
junon · 2h ago
rdtsc instruction access is gated by a permission bit. Sometimes it's allowed from userspace, sometimes it's not. There were issues with it in the past, I forget which off the top of my head.
It's also not as accurate as the High Precision Event Timer (HPET). I'm not sure which platforms gate/expose which these days but it's a grab bag.
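One concrete example of that gating on Linux: a process can have RDTSC disabled for itself via prctl(2). A minimal sketch (PR_SET_TSC and PR_TSC_SIGSEGV come from <sys/prctl.h>):

    #include <stdio.h>
    #include <sys/prctl.h>
    #include <x86intrin.h>

    int main(void) {
        printf("tsc: %llu\n", (unsigned long long)__rdtsc());  /* works */
        /* Ask the kernel to fault further RDTSC use by this process. */
        prctl(PR_SET_TSC, PR_TSC_SIGSEGV, 0, 0, 0);
        printf("tsc: %llu\n", (unsigned long long)__rdtsc());  /* SIGSEGV here */
        return 0;
    }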
loeg · 2h ago
Personally I'm not aware of any platform blocking rdtsc, so I was curious to learn which ones do.
What do you do on ARM?
A better title: a pathological test program meant for Linux does not trigger pathological behavior on OpenBSD
apgwoz · 8h ago
Surely you must be new to tedu posts…
ameliaquining · 8h ago
Still worth avoiding having the HN thread be about whether OpenBSD is in general faster than Linux. This is a thing I've seen a bunch of times recently, where someone gives an attention-grabbing headline to a post that's actually about a narrower and more interesting technical topic, but then in the comments everyone ignores the content and argues about the headline.
chasil · 3h ago
As I understand it, OpenBSD is similar to Linux 2.2 architecturally in that there is a lock that prevents (most) kernel code from running on more than one (logical) CPU at once.
We do hear that some kernel system calls have moved out from behind the lock over time, but anything requiring core kernel functionality must wait until the kernel is released from all other CPUs.
The kernel may be faster in this exercise, but is potentially a constrained resource on a highly loaded system.
This approach is more secure, however.
st_goliath · 6h ago
> This is a thing I've seen a bunch of times recently ...
> ... in the comments everyone ignores the content and argues about the headline.
Surely you must be new to Hacker News…
ameliaquining · 1h ago
It was more that there were a couple of particularly frustrating recent examples that happened to come to my attention. Of course this has always been a problem.
1vuio0pswjnm7 · 33m ago
"Usually it's the weirdo benchmark that shows OpenBSD being 10x slower, so this one is definitely going in the collection."
Interesting. I tried to follow the discussion in the linked thread, and the only takeaway I got was "something to do with RCU". What is the simplified explanation?
Could it be
https://web.archive.org/web/20031020054211if_/http://bulk.fe...
or is there another one
bobby_big_balls · 7h ago
In Linux, the file descriptor table (fdtable) of a process starts with a minimum of 256 slots. Two threads creating 256 sockets each, which uses 512 fds on top of the three already present (for stdin, stdout and stderr), requires that the fdtable be expanded about halfway through when the capacity is doubled from 256 to 512, and again near the end when resizing from 512 to 1024.
This is done by expand_fdtable() in the kernel. It contains the following code:
    if (atomic_read(&files->count) > 1)
        synchronize_rcu();
The field files->count is a reference counter. As there are two threads, which share a set of open files between them, the value of this is 2, meaning that synchronize_rcu() is called here during fdtable expansion. This waits until a full RCU grace period has elapsed, causing a delay in acquiring a new fd for the socket currently being created.
If the fdtable is expanded prior to creating a new thread, as the test program optionally will do by calling dup2(0, 666) if supplied a command line argument, this avoids the synchronize_rcu() call because at this point files->count == 1. Therefore, if this is done, there will be no delay later on when creating all the sockets as the fdtable will have sufficient capacity.
By contrast, the OpenBSD kernel doesn't have anything like RCU and just uses a rwlock when the file descriptor table of the process is being modified, avoiding the long delay during expansion that may be observed in Linux.
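A rough reconstruction of the test program under discussion, pieced together from this thread (the real source is in the linked post; names and structure here are guesses):

    /* build: cc -O2 -pthread bench.c */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/socket.h>
    #include <sys/time.h>
    #include <unistd.h>

    static void *make_sockets(void *arg) {
        (void)arg;
        for (int i = 0; i < 256; i++)
            socket(AF_INET, SOCK_STREAM, 0);   /* fds deliberately left open */
        return NULL;
    }

    int main(int argc, char **argv) {
        (void)argv;
        if (argc > 1)
            dup2(0, 666);   /* pre-grow the fdtable while files->count == 1 */

        struct timeval t0, t1;
        gettimeofday(&t0, NULL);
        pthread_t a, b;
        pthread_create(&a, NULL, make_sockets, NULL);
        pthread_create(&b, NULL, make_sockets, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        gettimeofday(&t1, NULL);
        printf("elapsed: %fs\n",
               (double)(t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6);
        return 0;
    }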
tptacek · 7h ago
RCUs are super interesting; here's (I think I've got the right link) a good talk on how they work and why they work that way:
https://www.youtube.com/watch?v=9rNVyyPjoC4
Context: https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...
Thanks for the explanation. I confirmed the performance timing difference by enabling the dup2 call.
I guess my question is why would synchronize_rcu take many milliseconds (20+) to run. I would expect that to be in the very low milliseconds or less.
altairprime · 6h ago
> allocating kernel objects from proper virtual memory makes this easier. Linux currently just allocates kernel objects straight out of the linear mapping of all physical memory
I found this to be a key takeaway of reading the full thread: this is, in part, a benchmark of kernel memory allocation approaches, that surfaces an unforeseen difference in FD performance at a mere 256 x 2 allocs. Presumably we’re seeing a test case distilled down from a real world scenario where this slowdown was traced for some reason?
saagarjha · 6h ago
That’s how they’re designed; they are intended to complete at some point that’s not soon. There’s an “expedited RCU” which to my understanding tries to get everyone past the barrier as fast as possible by yelling at them but I don’t know if that would be appropriate here.
viraptor · 8h ago
When 2 threads are allocating sockets sequentially, they fight for the locks. If you preallocate a bigger table by creating fd 666 first, the lock contention goes away.
JdeBP · 6h ago
It's something that has always been interesting about Windows NT, which has a multi-level object handle table, and does not have the rule about re-using the lowest numbered available table index. There's scope for reducing contention amongst threads in such an architecture.
Although:
1. back in application-mode code the language runtime libraries make things look like a POSIX API and maintain their own table mapping object handles to POSIX-like file descriptors, where there is the old contention over the lowest free entries; and
2. in practice the object handle table seems to mostly append, so multiple object-opening threads all contend over the end of the table.
saagarjha · 6h ago
RCU is very explicitly a lockless synchronization strategy.
loeg · 2h ago
This isn't why it's slow. 2x256 just isn't a lot of locks in CPU time.
ginko · 1h ago
The problem with locking isn’t the overhead of the locking mechanism itself but the necessary serialization.
loeg · 1h ago
Again, mutex contention / serialization is not why this is slow.
rurban · 10h ago
No, generally Linux is at least 3x faster than OpenBSD, because OpenBSD doesn't care much about optimization.
farhaven · 8h ago
OpenBSD is a lot faster in some specialized areas though. Random number generation from `/dev/urandom`, for example. When I was at university (in 2010 or so), it was faster to read `/dev/urandom` on my OpenBSD laptop and pipe it over ethernet to a friend's Linux laptop than running `cat /dev/urandom > /dev/sda` directly on his.
Not by just a bit, but it was a difference between 10MB/s and 100MB/s.
sillystuff · 8h ago
I think you meant to say /dev/random, not /dev/urandom.
/dev/random on Linux used to stall waiting for entropy from sources of randomness like network jitter, mouse movement, and keyboard typing. /dev/urandom has always been fast on Linux.
Today, Linux /dev/random mainly uses an RNG after initial seeding. The BSDs always did this. On my laptop, I get over 500MB/s (kernel 6.12).
IIRC, on modern Linux kernels, /dev/urandom is now just an alias for /dev/random, kept for backward compatibility.
tptacek · 7h ago
There's no reason for normal userland code not part of the distribution itself ever to use /dev/random, and getrandom(2) with GRND_RANDOM unset is probably the right answer for everything.
Both Linux and BSD use a CSPRNG to satisfy /dev/{urandom,random} and getrandom, and, for future-secrecy/compromise-protection continually update their entropy pools with hashed high-entropy events (there's ~essentially no practical cryptographic reason a "seeded" CSPRNG ever needs to be rekeyed, but there are practical systems security reasons to do it).
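A minimal sketch of that recommendation (getrandom with flags == 0, i.e. GRND_RANDOM unset; needs glibc >= 2.25):

    #include <stdio.h>
    #include <sys/random.h>

    int main(void) {
        unsigned char buf[32];
        if (getrandom(buf, sizeof buf, 0) < 0) {   /* 0 == GRND_RANDOM unset */
            perror("getrandom");
            return 1;
        }
        for (size_t i = 0; i < sizeof buf; i++)
            printf("%02x", buf[i]);
        putchar('\n');
        return 0;
    }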
sgarland · 6h ago
OpenBSD switched their PRNG to arc4random in 2012 (and then ChaCha20 in 2014); depending on how accurate your time estimate is, that could well have been the cause. Linux switched to ChaCha20 in 2016.
Related, I stumbled down a rabbit hole of PRNGs last year when I discovered [0] that my Mac was way faster at generating UUIDs than my Linux server, even taking architecture and clock speed into account. Turns out glibc didn’t get arc4random until 2.36, and the version of Debian I had at the time didn’t have 2.36. In contrast, since macOS is BSD-based, it’s had it for quite some time.
[0]: https://gist.github.com/stephanGarland/f6b7a13585c0caf9eb64b...
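For the curious, a hedged sketch of roughly what that UUID path looks like on top of arc4random_buf (glibc >= 2.36, any BSD, or macOS):

    #include <stdio.h>
    #include <stdlib.h>   /* arc4random_buf */

    int main(void) {
        unsigned char u[16];
        arc4random_buf(u, sizeof u);   /* userspace ChaCha20; no syscall per call */
        u[6] = (u[6] & 0x0f) | 0x40;   /* set UUID version 4 */
        u[8] = (u[8] & 0x3f) | 0x80;   /* set RFC 4122 variant */
        for (size_t i = 0; i < sizeof u; i++)
            printf("%02x", u[i]);
        putchar('\n');
        return 0;
    }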
/dev/urandom isn't a great test, IMO, simply because there are reasonable tradeoffs in security v speed.
For all I know BSD could be doing 31*last or something similar.
The algorithm is also free to change.
chowells · 7h ago
Um... This conversation is about OpenBSD, making that objection incredibly funny. OpenBSD has a mostly-deserved reputation for doing the correct security thing first, in all cases.
But that's also why the rng stuff was so much faster. There was a long period of time where the Linux dev in charge of randomness believed a lot of voodoo instead of actual security practices, and chose nonsense slow systems instead of well-researched fast ones. Linux has finally moved into the modern era, but there was a long period where the randomness features were far inferior to systems built by people with a security background.
tptacek · 7h ago
OpenBSD isn't meaningfully more secure than Linux. It probably was 20 years ago. Today it's more accurate to say that Linux and OpenBSD have pursued different security strategies --- there are meaningful differences, but they aren't on a simple one-dimensional spectrum of "good" to "bad".
(I was involved, somewhat peripherally, in OpenBSD security during the era of the big OpenBSD Security Audit).
sugarpimpdorsey · 5h ago
Haven't they had some embarrassing RCEs in the not too distant past? It kind of calls into question the significance of that claim about holes "in the default install" - even Windows ships without any services exposed these days.
Ultimately, they suffer from a lack of developer resources.
Which is a shame because it's a wonderfully integrated system (as opposed to the tattered quilt that is every Linux distro). But I suspect it's the project leadership that keeps more people away.
user3939382 · 2h ago
I’ve found the OpenBSD community to have a bad/snobbish attitude which could just be a coincidence, no idea. I’ve always liked NetBSD which I never had that problem with.
extraisland · 24m ago
My experience is that they expect you to read the docs and ask smart questions. Most everything is in the documentation, READMEs, etc.
somat · 7h ago
At one point, probably 10 years ago, I had Linux VM guests refuse to generate gpg keys; gpg insisted it needed the stupid blocking random device, and because the VM guest was not getting any "entropy" the process went nowhere. As an OpenBSD user naturally I was disgusted. There are many sane solutions to this problem, but I used none of them. Instead I found rngd, a service to accept "entropy" from a network source, and blasted it with the /dev/random from a fresh OpenBSD guest on the same VM host. Mainly out of spite. "Look here you little shit, this is how you generate random numbers."
craftkiller · 4h ago
Qemu added support for VirtIO RNG in 2012 [0] so depending on how accurate that 10 year figure is, you also could have used that to make your VM able to use the host system's entropy.
[0] https://wiki.qemu.org/Features/VirtIORNG
Rather, they care about security. The same mitigations in Linux would likely make it even slower.
themafia · 8h ago
Yea, well, I had to modify your website to make it readable. Why do people do this?
opan · 4h ago
It looks good to me on mobile. High contrast, lines aren't too long. What issue did you have?
themafia · 4h ago
There are two widgets in the lower left and lower right corners of the page which constantly shoot little bullets all over the screen chasing your mouse pointer. In the steady state on my screen there are about 20 sprites constantly moving across the page. There was no obvious way to disable them other than inspecting the page and deleting the widgets from the page.
If you want me to read your site, and you want to put detailed technical information on it, please don't add things like this without an off switch.
SoftTalker · 1h ago
That was added fairly recently, I’d guess as a joke, but I’m not in on the reasons.
antennafirepla · 3h ago
Very frustrating, I read with the mouse pointer and this made it impossible.
sugarpimpdorsey · 6h ago
OpenBSD is many things, but 'fast' is not a word that comes to mind.
Lightweight? Yes.
Minimalist? Definitely.
Compact? Sure.
But fast? No.
Would I host a database or fileserver on OpenBSD? Hell no.
Boot times seem to take as long as they did 20 years ago. They are also advocates for every schizo security mitigation they can dream up that sacrifices speed and that's ok too.
But let's not pretend it's something it's not.
ThinkBeat · 5h ago
You must run a different branch of OpenBSD than I.
chasil · 3h ago
In some defense of the parent post, a new kernel is relinked at every boot. This load is noticeable.
This is ASLR on steroids, and it does vastly increase kernel attack complexity, but it is a computational and I/O load that no version of Linux that I know of imposes.
Relinking the C library is relatively quick in comparison.
sillywalk · 43m ago
> a new kernel is relinked at every boot
Known as OpenBSD kernel address randomized link (KARL)[0][1]
Also, libc and libcrypto are re-linked at boot [2]. And sshd [3].
[0] https://marc.info/?l=openbsd-tech&m=149732026405941
[1] https://news.ycombinator.com/item?id=14709256
[2] https://news.ycombinator.com/item?id=14710180
[3] https://marc.info/?l=openbsd-cvs&m=167407459325339&w=2
By leaving my finger on the screen, I accidentally triggered an easter egg of two "cannons" shooting squares. Did anyone else notice it?
IFC_LLC · 4h ago
I'll be honest, this was the first time when I was unhappy with an easter egg and was unable to finish reading the article because of it.
Triggered in Safari on a Mac.
evanjrowley · 7h ago
I also saw it, and it happened on a non-touch computer screen.
pan69 · 7h ago
Happened for me on my normal desktop browser, cute but distracting. It also made my mouse cursor disappear. I had to move my mouse outside the browser window to make it visible again.
agambrahma · 8h ago
So ... essentially testing file descriptor allocation overhead
loeg · 1h ago
Sort of. Fd table size, which is slightly different than fds (once you reach the ulimit, there's no need to resize it larger); and only in multithreaded programs.
eyberg · 3h ago
This is kind of a stupid "benchmark" but if we're going to walk down this road:
linux: elapsed: 0.019895s
nanos (running on said linux): elapsed: 0.000886s
haunter · 8h ago
In my mind, faster = the same game with the same graphics settings gets more FPS.
(I don’t even know if you can actually start mainstream games on BSD or not)
nine_k · 8h ago
Isn't it mostly limited by GPU hardware, and by binary blobs that are largely independent from the host platform?
haunter · 8h ago
Games run better under Linux (even if they are not native but run with Proton/Wine) than on Windows 11, so the platform does matter: https://news.ycombinator.com/item?id=44381144
It annoys me when people claim this. It depends on the game, distro, proton version, what desktop environment, plus a lot of other things I have forgotten about.
Also latency is frequently worse on Linux. I play a lot of quick twitch games on Linux and Windows and while fps and frame times are generally in the same ballpark, latency is far higher.
Another problem is that Proton compatibility is all over the place. Some of the games Valve said were certified don't actually work well, mods can be problematic, and generally you end up faffing with custom launch options to get things working well.
zelphirkalt · 8h ago
Many of those games mysteriously fail to work for me, almost like Proton has a problem on my system in general and I am unable to figure it out. However, in the past I got games that are made for Windows to work better on WINE than on Windows. One of those games is Starcraft 2 when it came out. On Windows it would always freeze in one movie/sequence of the single player campaign, which made it actually unplayable on Windows, while after some trial and error, I managed to get a fully working game on GNU/Linux, and was able to finish the campaign.
This goes to show that the experience with Proton and different hardware and system configurations is highly individual, but also that games can indeed run better under WINE or Proton than on the system they were made for.
extraisland · 7h ago
Consistency is better than any theoretical FPS improvements IMO.
Often for games that don't work with modern Windows there are fan patches/mods that fix these issues.
Modern games frequently have weird framerate issues that rarely happen on Windows. When I am playing a multiplayer, fast-twitch game I don't want the framerate to randomly dip.
I was gaming exclusively on Linux from 2019 and gave up earlier this year. I wanted to play Red Alert 2, and trying to work out what to do with Wine and all the other stuff was a PITA. It was all easy on Windows.
My guess is it has something to do with the file descriptor table having a lot of empty entries (the dup2(0, 666) line.)
Now time to read the actual linked discussion.
wahern · 6h ago
I think dup2 is the hint, but in the example case the dup2 path isn't invoked--it's conditioned on passing an argument, but the test runs are just `./a.out`. IIUC, the issue is growing the file descriptor table. The dup2 is a workaround that preallocates a larger table (666 > 256 * 2)[1], to avoid the pathological case when a multi-threaded process grows the table. From the linked infosec.exchange discussion it seems the RCU-based approach Linux is using can result in some significant latency, resulting in much worse performance in simple cases like this compared to a simple mutex[2].
[1] Off-by-one. To be more precise, the state established by the dup2 is (667 > 256 * 2), or rather (667 > 3 + 256 * 2).
[2] Presumably what OpenBSD is using. I'd be surprised if they've already imported and adopted FreeBSD's approach mentioned in the linked discussion, notwithstanding that OpenBSD has been on an MP scalability tear the past few years.
jedberg · 9h ago
"It depends"
Faster is all relative. What are you doing? Is it networking? Then BSD is probably faster than Linux. Is it something Linux is optimized for? Then probably Linux.
A general benchmark? Who knows, but does it really matter?
At the end of the day, you should benchmark your own workload, but also it's important to realize that in this day and age, it's almost never the OS that is the bottleneck. It's almost always a remote network call.
loeg · 1h ago
The article is marginally more interesting than the headline, and it's not very long. Go ahead and read it.
M_r_R_o_b_o_t_ · 7h ago
Ye
znpy · 8h ago
the first step in benchmarking software is to use the same hardware.
the author failed the first step.
everything that follows is then garbage.
saagarjha · 6h ago
You do understand that people who know how to benchmark things don’t actually need to conform to the rules of thumb that are given to non-experts so they don’t shoot themselves in the foot, right? Do you also write off rally drivers because they have their feet on both pedals?
the_plus_one · 8h ago
Is it just me, or is there some kind of asteroid game shooting bullets at my cursor while I try to read this [1]? I hate to sound mean, but it's a bit distracting. I guess it's my fault for having JavaScript enabled.
[1]: https://flak.tedunangst.com/script.js
It's extremely distracting. I'm not normally one to have issues that require reduced motion, but the asteroids are almost distracting enough on their own, and the fact that it causes my cursor to vanish is a real accessibility issue. I didn't actually realize just how much I use my mouse cursor when reading stuff until now, partly as a fidget, partly as a controllable visual anchor as my eyes scan the page.
joemi · 6h ago
I actually can't read things on that site at all. I move my mouse around while reading, not necessarily near the words I'm currently reading, so when my mouse disappears it's haltingly distracting. In addition to that, the way the "game" visually interferes with the text that I'm trying to read makes it incredibly hard to focus on reading. These two things combine to make this site literally unreadable for me.
I don't get why people keep posting and upvoting articles from this user-hostile site.
binarycrusader · 6h ago
I found it exceedingly difficult to read, so I ended up applying these ublock filter rules so I could read it:
Please don't complain about tangential annoyances—e.g. article or website formats, name collisions, or back-button breakage. They're too common to be interesting.
https://news.ycombinator.com/newsguidelines.html
Because way more people have opinions about e.g. asteroid game scripts on web pages than have opinions on RCUs, these subthreads spread like kudzu.
ummonk · 6h ago
The described behavior sounds significantly worse than a tangential annoyance, and isn’t really a common occurrence even on modern user-hostile websites.
nomel · 8h ago
And, if it hits, your cursor disappears! I wish there was some explosion.
Jtsummers · 5h ago
He used to have a loading screen that did nothing if you have JS enabled in your browser, but no loading screen (which, again, did nothing) if you had JS disabled. I'm pretty sure it's meant to deliberately annoy, though this one is less annoying than the loading screen was.
bigstrat2003 · 8h ago
No, it's the website's fault for doing stupid cutesy stuff that makes the page harder to read. Don't victim-blame yourself here.
stavros · 7h ago
I really don't understand this "everything must be 100% serious all the time". Why is it stupid?
forbiddenlake · 4h ago
I didn't read the article because all the moving bits were too distracting. Something also turned my cursor invisible, which is rude.
Not sure anyone lost anything here, or anyone cares.
apodik · 7h ago
I generally think stuff like that make the web much more interesting.
In this case it was distracting though.
Gualdrapo · 7h ago
The HN hivemind decries the lack of humanity and personality of the internet of nowadays but at the same time wants every website to be 100% text, no JS, no CSS because allegedly nobody needs CSS and, if you dare to do something remotely "fancy" with the layout, you have to build it with <table>s.
jackyard86 · 2h ago
This is not about "lack of humanity", but about violating fundamental UX rules such as hiding your cursor at random times. It's offensive.
You don't have to sacrifice usability while expressing personality.
extraisland · 12m ago
I barely noticed it. The complaints I've read about it are making a mountain out of a molehill.
nomel · 4h ago
You should ask for a refund!
q3k · 7h ago
god forbid people have fun on the internet
uwagar · 6h ago
the bsd people seem to enjoy measuring and logging a lot.