gpderetta 4 hours ago

The claim that the lock-free array is faster than the locked variant is suspicious. The lock-free array is performing a CAS for every operation, and this is going to dominate[1]. A plain mutex would do two CASes (or just one if it is a spin lock), so the order-of-magnitude difference is not explainable by the lock-free property.

Of course if the mutex array is doing a linear scan to find the insertion point that would explain the difference but: a) I can't see the code for the alternative and b) there is no reason why the mutex variant can't use a free list.

Remember:

- Lock-free doesn't automatically mean faster (though it has other properties that might be desirable even if slower)

- Never trust a benchmark you didn't falsify yourself.

[1] When uncontended; when contended, cache coherence costs will dominate everything else, lock-free or not.

bonzini 4 hours ago

Yes, the code for the alternative is awful. However, I tried rewriting it with a better alternative (basically the same as the lock-free code, but with a mutex around it) and it was still 40% slower. See FixedVec in https://play.rust-lang.org/?version=stable&mode=release&edit...
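
For reference, the mutex variant is roughly this shape (a simplified sketch of the push path only, not the exact playground code):

    use std::sync::Mutex;

    // Fixed-capacity vec behind a single mutex: push takes the lock,
    // claims the next slot, and writes the value.
    struct FixedVec<T> {
        inner: Mutex<(Vec<Option<T>>, usize)>, // (slots, len)
    }

    impl<T> FixedVec<T> {
        fn with_capacity(cap: usize) -> Self {
            Self { inner: Mutex::new(((0..cap).map(|_| None).collect(), 0)) }
        }

        fn push(&self, value: T) -> Result<usize, T> {
            let mut guard = self.inner.lock().unwrap();
            let (slots, len) = &mut *guard;
            if *len == slots.len() {
                return Err(value); // full
            }
            let idx = *len;
            slots[idx] = Some(value);
            *len += 1;
            Ok(idx)
        }
    }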

gpderetta 4 hours ago

Given twice the number of CASes, about twice as slow is what I would expect for the mutex variant when uncontended. I don't know enough Rust to fix it myself, but could you try with a spin lock?
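
Conceptually something like this; a rough sketch, and since I don't really write Rust, treat it as pseudocode:

    use std::sync::atomic::{AtomicBool, Ordering};

    // Minimal test-and-test-and-set spin lock: one CAS to acquire,
    // a plain store to release. Real code would add backoff.
    struct SpinLock {
        locked: AtomicBool,
    }

    impl SpinLock {
        const fn new() -> Self {
            Self { locked: AtomicBool::new(false) }
        }

        fn lock(&self) {
            loop {
                // Spin on a plain load first to avoid hammering the cacheline with CASes.
                while self.locked.load(Ordering::Relaxed) {
                    std::hint::spin_loop();
                }
                if self.locked
                    .compare_exchange_weak(false, true, Ordering::Acquire, Ordering::Relaxed)
                    .is_ok()
                {
                    return;
                }
            }
        }

        fn unlock(&self) {
            self.locked.store(false, Ordering::Release);
        }
    }

Wrapping the vector's push in lock()/unlock() gives one CAS per push on the fast path, which is the apples-to-apples comparison I mean.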

As the benchmark is very dependent on contention, it would give very different results if the threads are scheduled serially as opposed to running truly concurrently (for example, using a spin lock would be awful if running on a single core).

So again, you need to be very careful to understand what you are actually testing.

michaelscott 4 hours ago

For applications doing extremely high rates of inserts and reads, lock-free is definitely superior. In extremely latency-sensitive applications like trading platforms (event processing sub-100ms) it's a requirement; locked structures cause bottlenecks at high throughput

r3tr0 4 hours ago

totally valid.

the benchmark is something i should have added more alternatives to.

Animats 6 hours ago

I've done a little bit of "lock-free" programming in Rust, but it's for very specialized situations.[1] This allocates and releases bits in a bitmap. The bitmap is intended to represent the slots in use in the Vulkan bindless texture index, which resides in the GPU. You can't read those slots from the CPU side to see if an entry is in use, so in-use slots in that table have to be tracked with an external bitmap.

This has no unsafe code. It's all done with compare and swap. There is locking here, but it's down at the hardware level within the compare and swap instruction. This is cleaner and more portable than relying on cross-CPU ordering of operations.

[1] https://github.com/John-Nagle/rust-vulkan-bindless/blob/main...
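
The core of it is just a CAS retry loop over the bitmap words, roughly like this (a simplified sketch, not the code from the repo):

    use std::sync::atomic::{AtomicU64, Ordering};

    /// Find a clear bit, set it atomically, and return its index.
    /// Retries a word if another thread raced us; None means the bitmap is full.
    fn allocate_bit(words: &[AtomicU64]) -> Option<usize> {
        for (word_idx, word) in words.iter().enumerate() {
            let mut current = word.load(Ordering::Relaxed);
            loop {
                if current == u64::MAX {
                    break; // word full, try the next one
                }
                let bit = current.trailing_ones(); // first clear bit
                match word.compare_exchange_weak(
                    current,
                    current | (1u64 << bit),
                    Ordering::AcqRel,
                    Ordering::Relaxed,
                ) {
                    Ok(_) => return Some(word_idx * 64 + bit as usize),
                    Err(observed) => current = observed, // lost the race, retry
                }
            }
        }
        None
    }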

MobiusHorizons 8 hours ago

I enjoyed the content, but could have done without the constant hyping up of the edginess of lock free data structures. I mean yes, like almost any heavily optimized structure there are trade offs that prevent this optimization from being globally applicable. But also being borderline aroused at the “danger” and rule breaking is tiresome and strikes me as juvenile.

lesser23 7 hours ago

The bullet points and some of the edge definitely smell like LLM assistance.

Other than that, I take the other side. I've started (and subsequently never finished) dozens of programming books because they are so god-awfully boring. This writing style, perhaps dialed back a little, helps keep my interest. I like the feel of a zine, where it's as technical as a professional write-up but far less formal.

I often find learning through analogy useful anyway and the humor helps a lot too.

bigstrat2003 8 hours ago

To each their own. I thought it was hilarious and kept the article entertaining throughout, with what would otherwise be a fairly dry subject.

atoav 7 hours ago

It is juvenile, but what do we know? Real Men use after free, so they wouldn't even use Rust to begin with.

The edgy tone sounds like it came from an LLM to me...

tombert 11 hours ago

Pretty interesting.

I have finally bitten the bullet and learned Rust in the last few months and ended up really liking it, but I have to admit that it's a bit lower level than I generally work in.

I have generally avoided locks by making very liberal use of Tokio channels, though that isn't for performance reasons or anything: I just find locks really hard to reason about for anything but extremely trivial usecases, and channels are a more natural abstraction for me.
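
The shape I end up with is usually one task that owns the state and everything else sending it messages, roughly like this (a toy sketch assuming tokio's mpsc, not code from any real project):

    use tokio::sync::mpsc;

    #[tokio::main]
    async fn main() {
        // One owner task holds the state; other tasks talk to it over a channel,
        // so nothing needs a lock around the state itself.
        let (tx, mut rx) = mpsc::channel::<u64>(64);

        let owner = tokio::spawn(async move {
            let mut total = 0u64;
            while let Some(n) = rx.recv().await {
                total += n;
            }
            total
        });

        for i in 0..10 {
            let tx = tx.clone();
            tokio::spawn(async move {
                tx.send(i).await.expect("owner task is gone");
            });
        }
        drop(tx); // close the channel so the owner can finish

        println!("sum = {}", owner.await.unwrap());
    }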

I've never really considered what goes into these lock-free structures, but that might be one of my next "unemployment projects" after I finish my current one.

forgot_old_user 10 hours ago

definitely! Reminds me of the golang saying

> Don't Communicate by Sharing Memory; Share Memory by Communicating

https://www.php.cn/faq/1796714651.html

tombert 9 hours ago

Yeah, similarly, Joe Armstrong (RIP), co-creator of Erlang explained it to me like this:

> In distributed systems there is no real shared state (imagine one machine in the USA another in Sweden) where is the shared state? In the middle of the Atlantic? — shared state breaks laws of physics. State changes are propagated at the speed of light — we always know how things were at a remote site not how they are now. What we know is what they last told us. If you make a software abstraction that ignores this fact you’ll be in trouble.

He wrote this to me in 2014, and it has really informed how I think about these things.

throwawaymaths 9 hours ago

The thing is that Go channels themselves are shared state (if the owner closes the channel and a client tries to write, you're not gonna have a good time)! Erlang message boxes are not.

aatd86 8 hours ago

Isn't entanglement in quantum physics the manifestation of shared state? tongue-in-cheek

gpderetta 4 hours ago

> Don't Communicate by Sharing Memory; Share Memory by Communicating

that's all well and good until you realize you are reimplementing a slow, buggy version of MESI in software.

Proper concurrency control is the key. Shared memory vs message passing is incidental and application specific.

revskill 9 hours ago

How can u be unemployed?

zero0529 7 hours ago

Like the writing style, but would prefer if it was dialed down maybe 10%. Otherwise a great article as an introduction to lock-free data structures.

Fiahil 1 hour ago

You can go one step further if:

- you don't reallocate the array

- you don't allow updating/ removing past inserted values

In essence it becomes a log, a Vec<OnceCell<T>> or a Vec<UnsafeCell<Option<T>>>. Works well, but only for a bounded array, so applications like messaging or inter-thread communication are not a perfect fit.
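
A rough sketch of what I mean, using std's OnceLock (the thread-safe sibling of OnceCell) plus an atomic write index; the names are made up:

    use std::sync::atomic::{AtomicUsize, Ordering};
    use std::sync::OnceLock;

    // Bounded, append-only "log": each slot is written once and can be
    // read concurrently while other slots are still being filled.
    struct FixedLog<T> {
        slots: Vec<OnceLock<T>>,
        next: AtomicUsize,
    }

    impl<T> FixedLog<T> {
        fn with_capacity(cap: usize) -> Self {
            Self {
                slots: (0..cap).map(|_| OnceLock::new()).collect(),
                next: AtomicUsize::new(0),
            }
        }

        /// Claim the next slot and publish the value; Err when the log is full.
        fn push(&self, value: T) -> Result<usize, T> {
            let idx = self.next.fetch_add(1, Ordering::Relaxed);
            if idx >= self.slots.len() {
                return Err(value);
            }
            let _ = self.slots[idx].set(value); // each index is claimed exactly once
            Ok(idx)
        }

        /// Read a slot; None if it hasn't been published yet.
        fn get(&self, idx: usize) -> Option<&T> {
            self.slots.get(idx)?.get()
        }
    }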

It's a fixed-size vector that can be read at the same time it's being written to. It's not a common need.

0x1ceb00da 11 hours ago

> AtomicUsize: Used for indexing and freelist linkage. It’s a plain old number, except it’s watched 24 / 7 by the CPU’s race condition alarm.

Is it though? Aren't atomic load/store instructions the actual important thing? I know the type system ensures that `AtomicUsize` can only be accessed using atomic instructions, but saying it's being watched by the CPU is inaccurate.

eslaught 9 hours ago

I'm not sure what the author intended, but one way to implement atomics at the microarchitectural level is via a load-linked/store-conditional pair of instructions, which often involves tracking the cache line for modification.

https://en.wikipedia.org/wiki/Load-link/store-conditional

It's not "24/7" but it is "watching" in some sense of the word. So not entirely unfair.
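
For instance, a CAS retry loop written against Rust's atomics API lowers to exactly that load-linked/store-conditional pattern on AArch64 without LSE (a sketch, not from the article):

    use std::sync::atomic::{AtomicUsize, Ordering};

    // On older AArch64 this compiles to an ldxr/stxr loop:
    // load-linked, modify, store-conditional, retry if the line was touched.
    fn add_one(counter: &AtomicUsize) -> usize {
        let mut current = counter.load(Ordering::Relaxed);
        loop {
            match counter.compare_exchange_weak(
                current,
                current + 1,
                Ordering::AcqRel,
                Ordering::Relaxed,
            ) {
                Ok(prev) => return prev,
                Err(observed) => current = observed,
            }
        }
    }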

jillesvangurp 4 hours ago

This is the kind of stuff that you shouldn't have to reinvent yourself but be able to reuse from a good library. Or the standard library even.

How would this compare to the lock-free abstractions that come with e.g. the java.util.concurrent package? It has a lot of useful primitives and data structures. I expect the memory overhead is probably worse for those.

Support for this is one of the big reasons Java and the JVM have been a popular choice for companies building middleware and data processing frameworks for the last few decades. Exactly the kind of stuff that the author of this article is proposing you could build with this: things like Kafka, Lucene, Spark, Hadoop, Flink, Beam, etc.

gpderetta 4 hours ago

> This is the kind of stuff that you shouldn't have to reinvent yourself but be able to reuse from a good library. Or the standard library even.

Indeed; normally we call it the system allocator.

A good system allocator will use per-thread or per-CPU free lists so that it doesn't need to do CAS loops for every allocation, though. At the very least it will use hashed pools to reduce contention.
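
The fast path looks roughly like this (a sketch of the idea in Rust, not any particular allocator):

    use std::cell::RefCell;
    use std::sync::Mutex;

    // Shared pool of free slot indices, plus a per-thread cache so the
    // common case never touches shared memory at all.
    static GLOBAL_FREE: Mutex<Vec<usize>> = Mutex::new(Vec::new());

    thread_local! {
        static LOCAL_FREE: RefCell<Vec<usize>> = RefCell::new(Vec::new());
    }

    fn alloc_slot() -> Option<usize> {
        // Fast path: thread-local, no CAS, no lock.
        if let Some(idx) = LOCAL_FREE.with(|l| l.borrow_mut().pop()) {
            return Some(idx);
        }
        // Slow path: refill a small batch from the global pool under the lock.
        let mut global = GLOBAL_FREE.lock().unwrap();
        let take = global.len().min(16);
        let start = global.len() - take;
        let batch: Vec<usize> = global.drain(start..).collect();
        drop(global);
        LOCAL_FREE.with(|l| {
            let mut local = l.borrow_mut();
            local.extend(batch);
            local.pop()
        })
    }

    fn free_slot(idx: usize) {
        // Freed slots go back to the local cache; a real allocator would
        // spill excess entries back to the global pool periodically.
        LOCAL_FREE.with(|l| l.borrow_mut().push(idx));
    }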

jonco217 4 hours ago

> NOTE: In this snippet we ignore the ABA problem

The article doesn't go into details, but this is a subtle way to mess up when writing lock-free data structures:

https://en.wikipedia.org/wiki/ABA_problem
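
The short version: a CAS only checks the value, not whether it changed and changed back in between. A compressed toy illustration (everything on one thread here; pretend the middle stores happen on another):

    use std::sync::atomic::{AtomicUsize, Ordering};

    fn main() {
        let x = AtomicUsize::new(1); // value "A"

        // Thread 1 reads A and plans a CAS based on what it saw...
        let observed = x.load(Ordering::Acquire);

        // ...meanwhile another thread changes A -> B -> back to A.
        x.store(2, Ordering::Release); // B
        x.store(1, Ordering::Release); // A again

        // The CAS still succeeds: it only compares values, not history.
        // For a freelist head this means the node may have been popped and
        // reused in between, so the "next" cached alongside it is stale.
        let result = x.compare_exchange(observed, 3, Ordering::AcqRel, Ordering::Acquire);
        assert!(result.is_ok());
    }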

r3tr0 53 minutes ago

i will do another one on just the ABA problem and how many different ways it can put your program in the hospital.

sph 34 minutes ago

Obligatory video from Jon Gjengset “Crust of Rust: Atomics and Memory Ordering”: https://youtu.be/rMGWeSjctlY?si=iDhOLFj4idOOKby8

fefe23 9 hours ago

To borrow an old adage: The determined programmer can write C code in any language. :-)

MobiusHorizons 8 hours ago

Atomics are hardly “C”. They are a primitive exposed by many CPU ISAs to help navigate the complexity those same CPUs introduced with OoO execution and complex caches in a multi-threaded environment. Much like SIMD, atomics require extending the language through intrinsics or new types because they represent capabilities that were not possible when the language was invented. Atomics require this extra support in Java just as they do in Rust or C.

pjmlp 1 hour ago

That screenshot is very much CDE-inspired.

lucraft 3 hours ago

Can I ask a dumb question - how is an atomic set operation implemented internally, if not by grabbing a lock?

Asraelite 8 minutes ago

It does use locks. If you go down deep enough you eventually end up with hardware primitives that are effectively locks, although they might not be called that.

The CPU clock itself can be thought of as a kind of lock.

moring 3 hours ago

Two things that come to my mind:

1. Sometimes "lock-free" actually means using lower-level primitives that use locks internally but don't expose them, with fewer caveats than using them at a higher level. For example, compare-and-set instructions offered by CPUs, which may use bus locks internally but don't expose them to software.

2. Depending on the lower-level implementation, a simple lock may not be enough. For example, in a multi-CPU system with weaker cache coherency, a simple lock will not get rid of outdated copies of data (in caches, queues, ...). Here I write "simple" lock because some concepts of a lock, such as Java's "synchronized" statement, bundle the actual lock together with guaranteed cache synchronization, whether that happens in hardware or software.

gpderetta 2 hours ago

Reminder that lock-free is a term of art with a very specific meaning about starvation-freedom and progress, and it has very little to do with locking.

gpderetta 2 hours ago

The hardware itself is designed to guarantee it. For example, the core guarantees that it will perform the load + compare + store from a cacheline in a finite number of cycles, while the cache coherency protocol guarantees that a) the core will eventually (i.e. it is fair) be able to acquire the cacheline in exclusive mode and b) will be able to hold it for a minimum number of clock cycles before another core forces an eviction or a downgrade of the ownership.

sophacles 2 hours ago

Most hardware these days has intrinsic atomics - they are built into the hardware in various ways, both in memory model guarantees (e.g. x86 has very strong cache coherency guarantees, ARM not so much) and in instructions (e.g. xchg on x86). The details vary a lot between different CPU architectures, which is why C++ and Rust have memory models to program to rather than the specific semantics of a given arch.

r3tr0 2 days ago

hope you enjoy this article on lock free programming in rust.

I used humor and analogies in the article not just to be entertaining, but to make difficult concepts like memory ordering and atomics more approachable and memorable.

nmca 9 hours ago

Did you get help from ChatGPT ooi? The humour sounds a bit like modern ChatGPT style but it’s uncanny valley.

bobbyraduloff 8 hours ago

at the very least that article was definitely edited with ChatGPT. i had someone on my team write “edgy” copy with ChatGPT last week and it sounded exactly the same. short paragraphs and overuse of bullet points are also a dead giveaway. i don’t think it’s super noticeable if you don’t use ChatGPT a lot but for the people that use these systems daily, it’s still very easy to spot.

my suggestion to OP: this was interesting material, but ChatGPT made it hard to read. use your own words to explain it. most people interested in this deeply technical content would rather read your prompt than the output.

r3tr0 4 hours ago

i had GPT help with some grammar, editing, and shortening.

The core ideas, jokes, code, and analogies are 100% mine.

Human chaos. Machine polish.

tombert 11 hours ago

Interesting read, I enjoyed it and it answered a question that I didn't even realize I had been asking myself for years, which is how lock-free structures work.

Have you looked at CTries before? They're pretty interesting, and I think are probably the future of this space.

ephemer_a 10 hours ago

did 4o write this