Can I ask a dumb question - how is Atomic set operation implemented internally if not by grabbing a lock?
Two things that come to my mind:
1. Sometimes "lock-free" actually means using lower-level primitives that use locks internally but don't expose them, and that carry fewer caveats than locks used at a higher level. For example, compare-and-set instructions offered by CPUs may use bus or cache-line locks internally but don't expose them to software.
2. Depending on the lower-level implementation, a simple lock may not be enough. For example, on a multi-CPU system with weaker memory ordering, a bare lock flag by itself will not get rid of outdated copies of data (in caches, store buffers, ...). I write "simple" lock here because some lock constructs, such as Java's "synchronized" statement, bundle the actual lock together with guaranteed memory synchronization, whether that happens in hardware or software.
Reminder that "lock-free" is a term of art with a very specific meaning about starvation-freedom and progress guarantees, and has very little to do with locking.
The hardware itself is designed to guarantee it. For example, the core guarantees that it will perform the load + compare + store on a cacheline in a finite number of cycles, while the cache coherency protocol guarantees that (a) the core will eventually (i.e. fairly) be able to acquire the cacheline in exclusive mode, and (b) it will be able to hold it for a minimum number of clock cycles before another core forces an eviction or a downgrade of ownership.
It does use locks. If you go down deep enough you eventually end up with hardware primitives that are effectively locks, although they might not be called that.
The CPU clock itself can be thought of as a kind of lock.
Most hardware these days has intrinsic atomics - they are built into the hardware in various ways, both in memory model guarantees (e.g. x86 has very strong cache coherency and ordering guarantees, ARM not so much) and in instructions (e.g. XCHG on x86). The details vary a lot between CPU architectures, which is why C++ and Rust define memory models to program against rather than the specific semantics of a given architecture.