n_u 8 days ago

Thanks for the example! I'll play around with it.

> On my machine, the inflection point is around 10^14 goroutines - after which the mutex version becomes drastically slower;

How often are you reaching 10^14 goroutines accessing a shared resource on a single process in production? We mostly use short-lived small AWS spot instances so I never see anything like that.

> Why? Designing and implementing concurrent runtimes has not made its way onto my CV yet; hopefully a lurking Go contributor can comment.

> If I had to guess, the channel mutex may be specialised since it protects only enqueuing or dequeuing onto a simple buffer. A sync.Mutex is a general construct that can protect any kind of critical region.

Haha, fair enough; I also know little about mutex implementation details. An optimized, specialized tool vs. a generic tool feels like a reasonable first guess.

Though I wonder: if you use channels for more general mutex-like purposes, are they less efficient in those cases? I guess I'll have to do some benchmarking myself.
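
Something like this is what I have in mind, a 1-buffered channel standing in for a sync.Mutex (an untested sketch; the names are mine):

    package main

    import (
        "fmt"
        "sync"
    )

    // chanMutex uses a buffered channel of capacity 1 as a lock:
    // sending acquires the lock, receiving releases it.
    type chanMutex chan struct{}

    func (m chanMutex) Lock()   { m <- struct{}{} }
    func (m chanMutex) Unlock() { <-m }

    func main() {
        mu := make(chanMutex, 1)
        counter := 0

        var wg sync.WaitGroup
        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock()
                    counter++
                    mu.Unlock()
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // always prints 8000
    }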

> If the communication pattern is passing streams of work where goroutines have an acyclic communication dependency graph, then it's a no brainer: channels will be performant and a deadlock will be hard to introduce.

I agree with your rules. I used to always use channels for single-process thread-safe queues (similar to your Kafka rule), but recently I ran into a cyclic communication pattern with a queue and eventually relented and used a Mutex. I wonder if there are other painful channel concurrency patterns lurking for me to waste time on.
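
For anyone curious, the cycle reduced to roughly this shape (a contrived sketch, not my real code): two goroutines that each block sending to the other on unbuffered channels.

    package main

    func main() {
        a := make(chan int) // unbuffered
        b := make(chan int) // unbuffered

        go func() {
            a <- 1 // blocks: main is not receiving from a yet
            <-b
        }()

        b <- 2 // blocks: the goroutine is still stuck sending on a
        <-a
        // Both sends block forever; the runtime aborts with
        // "fatal error: all goroutines are asleep - deadlock!"
    }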

franticgecko3 8 days ago

> How often are you reaching 10^14 goroutines accessing a shared resource on a single process in production? We mostly use short-lived small AWS spot instances so I never see anything like that.

I apologize, that should've said 2^14; each sub-benchmark is a doubling of goroutines.

2^14 is 16,384, which is quite a reasonable order of magnitude for contention on a shared resource.
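
If you want to reproduce it, the benchmark was shaped roughly like this (a from-memory sketch, not my exact code): each sub-benchmark doubles the goroutine count up to 2^14 and hammers one shared counter, once guarded by a sync.Mutex and once by passing the value through a 1-buffered channel.

    package contention

    import (
        "fmt"
        "sync"
        "testing"
    )

    // Each sub-benchmark doubles the number of goroutines contending
    // on a single counter, from 1 up to 2^14 = 16384.
    func BenchmarkMutexCounter(b *testing.B) {
        for g := 1; g <= 1<<14; g <<= 1 {
            b.Run(fmt.Sprintf("goroutines=%d", g), func(b *testing.B) {
                var mu sync.Mutex
                counter := 0
                var wg sync.WaitGroup
                for i := 0; i < g; i++ {
                    wg.Add(1)
                    go func() {
                        defer wg.Done()
                        // note: when g > b.N the inner loop is empty;
                        // fine for a sketch
                        for j := 0; j < b.N/g; j++ {
                            mu.Lock()
                            counter++
                            mu.Unlock()
                        }
                    }()
                }
                wg.Wait()
            })
        }
    }

    func BenchmarkChannelCounter(b *testing.B) {
        for g := 1; g <= 1<<14; g <<= 1 {
            b.Run(fmt.Sprintf("goroutines=%d", g), func(b *testing.B) {
                ch := make(chan int, 1) // the counter lives in the channel
                ch <- 0
                var wg sync.WaitGroup
                for i := 0; i < g; i++ {
                    wg.Add(1)
                    go func() {
                        defer wg.Done()
                        for j := 0; j < b.N/g; j++ {
                            v := <-ch
                            ch <- v + 1
                        }
                    }()
                }
                wg.Wait()
            })
        }
    }

Run it with `go test -bench . -benchtime 10000000x` or similar and compare the per-op times as the goroutine count doubles.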