Fun post! An alternative to using futexes to store thread queues in kernel space is to store them yourself. E.g. the parking_lot[0] Rust crate, inspired by WebKit[1], uses only one byte to store the unlocked/locked/locked_contended state, and under contention uses the address of the byte to index into a global open-addressing hash table of thread queues. You look up the object's entry, lock said entry, add the thread to the queue, unlock it, and go to sleep. Because you know that there is at most one entry per thread, you can keep the load factor very low in order to keep the mutex fast and form the thread queue out of a linked list of thread-locals. Leaking the old table when the hash table has to grow helps make resizing safe, since other threads may still be reading the old one.
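For readers who haven't seen the trick, here is a heavily simplified sketch of that shape in Rust. It is not parking_lot's actual code: the ByteLock name, the fixed bucket count and the Condvar-based queues are all mine, and the real crate builds its queues out of per-thread nodes in the resizable table described above.

```rust
use std::sync::atomic::{AtomicU8, Ordering};
use std::sync::{Condvar, Mutex, OnceLock};

const UNLOCKED: u8 = 0;
const LOCKED: u8 = 1;
const CONTENDED: u8 = 2;

const BUCKETS: usize = 64;

struct Bucket {
    m: Mutex<()>,
    cv: Condvar,
}

// Global "parking lot": wait queues indexed by a hash of the lock byte's address.
fn bucket(addr: usize) -> &'static Bucket {
    static LOT: OnceLock<Vec<Bucket>> = OnceLock::new();
    let lot = LOT.get_or_init(|| {
        (0..BUCKETS)
            .map(|_| Bucket { m: Mutex::new(()), cv: Condvar::new() })
            .collect()
    });
    &lot[addr % BUCKETS]
}

pub struct ByteLock {
    state: AtomicU8, // the entire lock is this one byte
}

impl ByteLock {
    pub const fn new() -> Self {
        ByteLock { state: AtomicU8::new(UNLOCKED) }
    }

    pub fn lock(&self) {
        // Fast path: one uncontended CAS, the global table is never touched.
        if self
            .state
            .compare_exchange(UNLOCKED, LOCKED, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
        {
            return;
        }
        self.lock_contended();
    }

    fn lock_contended(&self) {
        let b = bucket(&self.state as *const AtomicU8 as usize);
        loop {
            // Advertise a waiter; if the lock happened to be free, we now own it.
            if self.state.swap(CONTENDED, Ordering::Acquire) == UNLOCKED {
                return;
            }
            // Park on the bucket; re-check under the bucket lock so an unlock
            // racing with us can't be missed.
            let guard = b.m.lock().unwrap();
            if self.state.load(Ordering::Relaxed) == CONTENDED {
                drop(b.cv.wait(guard).unwrap());
            }
        }
    }

    pub fn unlock(&self) {
        // Only touch the global table if someone advertised themselves as waiting.
        if self.state.swap(UNLOCKED, Ordering::Release) == CONTENDED {
            let b = bucket(&self.state as *const AtomicU8 as usize);
            let _guard = b.m.lock().unwrap();
            b.cv.notify_all();
        }
    }
}
```

parking_lot layers fairness, timeouts and per-thread queue nodes on top of this basic shape, but the division of labour is the same: the byte is the whole lock, and the queues only come into play on the contended paths.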
As a result, uncontended locks work the same as described in the blog post above; under contention, performance is similar to a futex too. But now your locks are only one byte in size, regardless of platform – while Windows allows 1-byte futexes, they're always 4 bytes on Linux and iirc Darwin doesn't quite have an equivalent api (but I might be wrong there). You also have more control over parked threads if you want to implement different fairness criteria, reliable timeouts or parking callbacks.
One drawback of this is that you can only easily use this within one process, while at least on Linux futexes can be shared between processes.
I've written a blog post[2] about using futexes to implement monitors (reëntrant mutexes with an associated condvar) in a compact way for my toy Java Virtual Machine, though I've since switched to a parking-lot-like approach.
[0]: https://github.com/amanieu/parking_lot
[1]: https://webkit.org/blog/6161/locking-in-webkit
[2]: https://specificprotagonist.net/jvm-futex.html
That's not a very useful property, though. Because inter-core memory traffic works at cache-line granularity, packing more than one lock into a cache line is a Bad Idea™. Potentially it lets you pack the lock together with the data it protects... but alignment rules mean you're invariably going to end up spending 4 or 8 bytes (a regular integer or a pointer) on that lock anyway.
vlovich123 · 1d ago
In Rust the compiler will reorder struct fields automatically to minimize padding, so your 1-byte mutex would be placed after any multi-byte data.
scottlamb · 1d ago
That's typically not true, because of how `Mutex<T>` is laid out: the signaling byte sits inside `struct Mutex` next to the `T`, so the mutex's size is rounded up to `T`'s alignment, and the outer struct only sees one opaque, already-padded field rather than a loose byte it could tuck into its own padding.
You can avoid this with a `parking_lot::Mutex<()>` or `parking_lot::RawMutex` guarding other contents, but then you need to use `unsafe` because the borrow checker doesn't understand what you're doing.
I coincidentally was discussing this elsewhere recently: https://www.reddit.com/r/rust/comments/1ky5gva/comment/mv3kp...
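To make the layout point concrete, here is a small sizing demo. It assumes the parking_lot crate; the struct names are made up, and the sizes in the comments are what I'd expect on a typical 64-bit target, not guarantees.

```rust
use parking_lot::{Mutex, RawMutex};
use std::mem::size_of;

#[allow(dead_code)]
struct Naive {
    // The 1-byte lock state lives inside Mutex<u64>, so the whole field gets
    // rounded up to u64's alignment: 8 (data) + 1 (state) + 7 (padding) = 16,
    // and `flag` then pushes the outer struct to 24.
    counter: Mutex<u64>,
    flag: u8,
}

#[allow(dead_code)]
struct Packed {
    // Keeping the raw 1-byte lock as its own field lets the compiler pack it
    // next to other small fields, but nothing ties `counter` to the lock any
    // more: you have to uphold the locking protocol yourself.
    lock: RawMutex,
    flag: u8,
    counter: u64,
}

fn main() {
    println!("Mutex<u64>: {} bytes", size_of::<Mutex<u64>>()); // typically 16
    println!("Naive:      {} bytes", size_of::<Naive>());      // typically 24
    println!("Packed:     {} bytes", size_of::<Packed>());     // typically 16
}
```

Whether saving those bytes is worth giving up the `Mutex<T>` ownership guarantees is exactly the trade-off being discussed here.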
You could use CAS loops throughout to make your locks effectively "less than one byte" in size: the lock still occupies one byte, or perhaps one machine word, but the free bits of that byte/word stay available to store arbitrary data. (This works because a CAS loop can implement any read-modify-write operation on atomically sized data. But CAS will be somewhat slower than special-cased hardware atomics, so this is a bad idea for locks that are performance-sensitive.)
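A rough sketch of that idea: an aligned pointer's low bit is always zero, so it can double as a spin-lock bit, with compare_exchange doing the read-modify-write. The names are mine and it is illustrative only; a real version has to define how the packed value itself gets updated under the lock.

```rust
use std::hint;
use std::sync::atomic::{AtomicUsize, Ordering};

const LOCK_BIT: usize = 0b1;

pub struct PtrWithLock {
    // Low bit = lock flag, remaining bits = an address whose alignment is >= 2.
    word: AtomicUsize,
}

impl PtrWithLock {
    pub fn new(addr: usize) -> Self {
        assert_eq!(addr & LOCK_BIT, 0, "needs a spare low bit");
        PtrWithLock { word: AtomicUsize::new(addr) }
    }

    // Spin until the lock bit is clear, then set it with a single CAS.
    pub fn lock(&self) -> usize {
        loop {
            let cur = self.word.load(Ordering::Relaxed);
            if cur & LOCK_BIT == 0
                && self
                    .word
                    .compare_exchange_weak(cur, cur | LOCK_BIT, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
            {
                return cur; // the packed value, lock bit clear
            }
            hint::spin_loop();
        }
    }

    pub fn unlock(&self) {
        // Clear the lock bit, leaving the packed value untouched.
        self.word.fetch_and(!LOCK_BIT, Ordering::Release);
    }
}
```

As the comment above says, every operation is now a CAS loop rather than a single special-cased atomic, and this version spins instead of parking, so it only makes sense for very short critical sections.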
gpderetta · 1d ago
Single bit spin locks to protect things like linked list nodes are not unheard of.
gpderetta · 1d ago
Enough to be able to pack a mutex and a pointer together, for example. If you are carefully packing your structs, a one-byte mutex is great.
skitter · 1d ago
Yup, that's what I'm doing - storing the two bits needed for an object's monitor in the same word as its compressed class pointer. The pointer doesn't change over the lock's lifetime.
mandarax8 · 22h ago
But you can embed this 1-byte lock into other, bigger objects (e.g. the high bytes of a pointer).
With 4-byte locks you run into the exact same false-sharing issues.
gmokki · 1d ago
Doesn't the futex2 syscall allow 1-byte futexes on recent kernels?
Double checks. Nope. The API is there and the patch to implement them has been posted multiple times: https://lore.kernel.org/lkml/20241025093944.707639534@infrad...
But the small futex2 patch will not go forward until some users say they want/need the feature.
gpderetta · 1d ago
> don’t wake this thread up while the value is still X
That's the wrong way to think about FUTEX_WAIT. What it does is "put this thread to sleep, but only if the value is still X; otherwise return immediately".
> If you call futex wait and the value is unchanged, the sleeping thread will not wake up!
[I assume this was meant to be FUTEX_WAKE] I can't be bothered to check the kernel source or to test, but I would be surprised if this were true, as it could cause missed wakeups in an ABA scenario. FUTEX_WAKE must wake up at least one successful FUTEX_WAIT that happens-before the wake. Futexes are best understood as edge-triggered, stateless (outside of the wait list itself) primitives, so the value at the futex location (as opposed to its address) is not really important[1], except as a guard to avoid missed wakeups.
Unfortunately the name itself (Fast Userspace Mutex) is a bit misleading, because a mutex is only one of the many things you can do with a futex. They really are a generalized waiting and signaling primitive.
[1] for plain WAIT and WAKE at least, the bitset operations or the robust futex operations are more complex and attach semantics to the value.
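To make the "value is only a guard" framing concrete, here is a minimal Linux-only sketch of the two operations through the raw syscall, using the libc crate; error handling and EINTR retries are omitted.

```rust
use std::ptr;
use std::sync::atomic::AtomicU32;

pub fn futex_wait(addr: &AtomicU32, expected: u32) {
    unsafe {
        // Returns immediately (EAGAIN) if *addr != expected; otherwise sleeps
        // until a FUTEX_WAKE on this address, or a spurious wakeup.
        libc::syscall(
            libc::SYS_futex,
            addr as *const AtomicU32,
            libc::FUTEX_WAIT | libc::FUTEX_PRIVATE_FLAG,
            expected,
            ptr::null::<libc::timespec>(), // no timeout
        );
    }
}

pub fn futex_wake(addr: &AtomicU32, n: i32) {
    unsafe {
        // Wakes at most `n` threads currently waiting on this address; the
        // value stored at the address is not examined at all here.
        libc::syscall(
            libc::SYS_futex,
            addr as *const AtomicU32,
            libc::FUTEX_WAKE | libc::FUTEX_PRIVATE_FLAG,
            n,
        );
    }
}
```

The usual pattern is that the waiter loops, re-checking the userspace value around futex_wait, while the waker changes the value first and only then calls futex_wake; the expected-value check exists purely as the guard against missed wakeups described above. (Dropping FUTEX_PRIVATE_FLAG is what enables the cross-process sharing mentioned near the top of the thread.)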
tialaramex · 1d ago
> I would be surprised if this is true as it might cause missed wakeups in an ABA scenario.
More importantly, what could "unchanged" even mean? For FUTEX_WAIT we provide val, a value we're saying is the value stored at the futex address, and the kernel can check that's true. But for FUTEX_WAKE, val is a count - typically 1, meaning "only wake one", or the maximum positive value, meaning "everybody" - although in principle, if you can find a reason to wake up to 7 waiters but no more, that's allowed.
kentonv · 1d ago
Came here because I had the same reaction but also can't be bothered to test so was hoping someone else did.
Guess we'll just never know for sure, lol.
scottlamb · 23h ago
Related: did the idea of rseq `RSEQ_SCHED_STATE_FLAG_ON_CPU` for adaptive userspace mutexes [1] ever come to anything? I think there are a lot of userspace lock implementations using adaptive mutexes (including say `absl::Mutex` in C++ and `parking_lot::Mutex` in Rust). This seemed promising as a better way to decide when to switch from spinning to blocking.
[1] https://lwn.net/Articles/944895/
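For context, the decision in question is the one in the slow path below. This is a generic spin-then-park sketch with made-up names, not absl::Mutex or parking_lot code; the rseq ON_CPU idea would replace the blind spin counter with "keep spinning only while the current owner is actually running on a CPU".

```rust
use std::hint;
use std::sync::atomic::{AtomicU32, Ordering};

const UNLOCKED: u32 = 0;
const LOCKED: u32 = 1;
const CONTENDED: u32 = 2;

// Blind tuning knob; this is exactly the guess the rseq flag would replace.
const SPIN_LIMIT: u32 = 100;

// Stand-in for a real blocking primitive (futex wait, parking-lot park, ...).
fn park(state: &AtomicU32, expected: u32) {
    if state.load(Ordering::Relaxed) == expected {
        std::thread::yield_now();
    }
}

pub fn adaptive_lock(state: &AtomicU32) {
    let mut spins = 0;
    loop {
        if state
            .compare_exchange_weak(UNLOCKED, LOCKED, Ordering::Acquire, Ordering::Relaxed)
            .is_ok()
        {
            return;
        }
        if spins < SPIN_LIMIT {
            // Optimistic spin: the owner may release soon, so avoid the
            // sleep/wake round trip through the kernel.
            spins += 1;
            hint::spin_loop();
        } else {
            // Record that a waiter exists, then block until woken.
            if state.swap(CONTENDED, Ordering::Acquire) == UNLOCKED {
                return;
            }
            park(state, CONTENDED);
        }
    }
}
```

Spinning only pays off while the owner is on a CPU and likely to release soon; once the owner is descheduled, every further spin is wasted, which is why a cheap "is that thread on-CPU right now?" query looked so attractive.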
Darwin has its own set of futex primitives that it only fairly recently made public API, see https://developer.apple.com/documentation/os/os_sync_wait_on.... But there is a problem with this approach on Darwin, which is that the Darwin kernel has a Quality of Service thread priority implementation that differs from other kernels such that mutexes implemented with spinlocks or with primitives like this are vulnerable to priority inversion. Priority inversion is of course possible on other platforms, but other kernels typically guarantee even low-priority threads always eventually get serviced, whereas on Darwin a low-QoS thread will only get serviced if there are no higher-QoS threads that want to run.
For this reason, on Darwin if you want a mutex of the sort this article describes, you'll typically want to reach for os_unfair_lock, as that will donate the priority of the waiting threads to the thread that holds the lock, thus avoiding the priority inversion issue.
gpderetta · 21h ago
In principle you would have the same issue with POSIX realtime scheduling (i.e. SCHED_FIFO and SCHED_RR), but these days, by default, Linux will still reserve 5% of CPU time for non-RT threads. This can be disabled, though.
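The throttling knobs are the sched_rt_* sysctls: by default sched_rt_runtime_us is 950000 out of a sched_rt_period_us of 1000000, i.e. realtime tasks may consume at most 95% of each period, and writing -1 to sched_rt_runtime_us (as root) disables the throttling. A tiny Linux-only snippet to inspect them:

```rust
use std::fs;

fn main() {
    // Defaults are typically 1000000 (period) and 950000 (runtime).
    for knob in ["sched_rt_period_us", "sched_rt_runtime_us"] {
        let path = format!("/proc/sys/kernel/{knob}");
        match fs::read_to_string(&path) {
            Ok(v) => println!("{knob} = {}", v.trim()),
            Err(e) => println!("{knob}: {e}"),
        }
    }
}
```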
geertj · 1d ago
The annoying thing about locks (at least the variant that waits) is not just that you have to enter the kernel and wait when the lock is not available (fair enough), but also that the current holder will have to wake you, which requires another dip into the kernel by the holder.
I have been thinking on and off about how to create a syscall-less wake operation. One way to get almost what you want is to have a polling io_uring. That still requires one busy-polling kernel thread per application. Maybe this is fine in some application architectures, but it's not ideal.
It would be nice if there was a way to use Intel's debug registers to write a value to some address, which would then interrupt some kernel task, allowing that kernel task to somehow figure out what futex to wake, without the interrupter having to enter the kernel.
nrds · 3h ago
> It would be nice if there was a way to use Intel's debug registers to write a value to some address, which would then interrupt some kernel task
What you have described is literally the syscall mechanism. That's what it is. You perform some register write (via a specific instruction) and an interrupt is taken to the kernel.
Maybe you believe that an asynchronous interrupt would cost less than a synchronous interrupt for this particular objective but I'm not sure there's evidence for that claim.
zozbot234 · 1d ago
The point of locks 'waiting' is really just that they degrade nicely under heavy contention, e.g. when more threads are trying to take the lock than you have available cores/harts. Busy polling will lead to terrible performance in such conditions, whereas threads that "wait" will do the right thing and leave CPU resources free for the active tasks to progress.
geertj · 1d ago
I mentioned busy polling as a means to an end, with the end being the ability to wake a thread without requiring a system call (ideally without busy polling!).
gpderetta · 1d ago
> It would be nice if there was a way to use Intel's debug registers to write a value to some address, which would then interrupt some kernel task
Apparently Intel CPUs were supposed to get user-space interrupts, which would do exactly this. I'm not sure if hardware was ever shipped with support, though.
Also look into monitor/mwait.