> lets say it can process a request normally in 20us.
Then what if the OS/thread hangs? Or maybe even a hardware issue? It seems a bit weird to have the critical path blocked by a single mutex. That's a recipe for problems, or am I missing something?
Hardware issues happen, but if you're lucky it's a simple failure and the box stops dead. Not much fun, but recovery can be quick and automated.
The real trouble is when the hardware fault is partial: say one of the 16 NIC queues stops, so most connections work but not all (depending on the hash of the 4-tuple), or a bit in the RAM fails and now you're hitting thousands of ECC-correctable errors per second and your effective CPU capacity is down to 10% ... the system is now too slow to work properly, but it manages to stay connected to dist and still attracts traffic it can't reasonably serve.
But OS/thread hangs are avoidable in my experience. If you run your BEAM system with very few OS processes, there's no reason for the OS to cause trouble.
But on the topic of a 15ms pause... that pause is likely causally related to cascading pauses; it might be the beginning, the middle, or the end. When one thing slows down, others do too, and some processes can't recover once the backlog gets over a critical threshold, which is kind of unknowable without experiencing it. WhatsApp had a couple of hacks to deal with this:

A) Our gen_server aggregation framework used our hacky version of priority messages to let the worker determine the age of requests and drop them if they were too old.

B) We had a hack to drop all messages in a process's mailbox through the introspection facilities, and sometimes we automated that with cron... Very few processes can work through a mailbox with 1 million messages; dropping them all gets to recovery faster.

C) We tweaked garbage collection to run less often when the mailbox was very large. I think this is addressed by off-heap mailboxes now, but when GC looks through the mailbox every so many iterations and the mailbox is very large, it can drive an unrecoverable cycle: eventually GC time limits throughput below the accumulation rate and you'll never catch up.

D) We added process stats so we could see accumulation and drain rates and estimate time to drain (or whether the process would never drain), and built monitoring around that.
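For C) and D), a rough sketch of what that can look like on today's OTP (this is not the actual WhatsApp code; the module and names are invented): off-heap mailboxes keep GC from rescanning a huge queue, and polling message_queue_len gives you accumulation vs. drain rates.

```erlang
%% Sketch only, not the actual WhatsApp implementation.
-module(queue_watch).
-export([init_worker/0, sample/1]).

%% (C) Keep received messages off the process heap so a huge backlog
%% isn't rescanned by every GC (available since OTP 19).
init_worker() ->
    process_flag(message_queue_data, off_heap),
    ok.

%% (D) Sample a worker's queue length twice and report the length, the
%% net rate, and a rough time-to-drain so monitoring can alert on
%% "this process will never drain".
sample(Pid) ->
    {message_queue_len, Len1} = erlang:process_info(Pid, message_queue_len),
    timer:sleep(1000),
    {message_queue_len, Len2} = erlang:process_info(Pid, message_queue_len),
    RatePerSec = Len2 - Len1,                  %% positive = still accumulating
    DrainEta = case RatePerSec < 0 of
                   true  -> Len2 / -RatePerSec;   %% seconds, very rough
                   false -> never
               end,
    {Len2, RatePerSec, DrainEta}.
```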
> we had a hack to drop all messages in a process's mailbox through the introspection facilities and sometimes we automated that with cron...
What happens to the messages? Do they get processed at a slower rate, or on a subsystem that works in the background without more messages constantly being added? Or do you just nuke them from orbit and not care? That doesn't seem like a good idea to me, since it means losing information. Would love to know more about this!
Nuked; it's the only way to be sure. It's not that we didn't care about the messages in the queue, it's just that there are too many of them, they can't be processed, and so into the bin they go. This strategy is more viable for reads and less viable for writes, and you shouldn't nuke the mnesia processes' queues, even when they're very backlogged ... you've got to find a way to put backpressure on those things, maybe a flag to error out on writes before they're sent into the overlarge queue.
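For what it's worth, the in-process version of "nuke the mailbox" is just a catch-all receive loop, something like the sketch below. The hard part is getting a process that's already buried under a million messages to run it at all, which is exactly why the external introspection / priority-message hacks existed in the first place.

```erlang
%% Hypothetical sketch: drop everything currently sitting in the calling
%% process's mailbox and return how many messages were thrown away.
flush_mailbox() ->
    flush_mailbox(0).

flush_mailbox(N) ->
    receive
        _Any -> flush_mailbox(N + 1)
    after 0 ->
        N
    end.
```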
Mostly this is happening in the context of request/response. If you're a client and you connect to the frontend, you send an auth blob, and the frontend sends it to the auth daemon to check it out. If the auth daemon can't respond to the frontend in a reasonable time, the frontend will drop the client, so there's no point in the auth daemon looking at old messages. If it's developed a backlog so high it can't work it back down, we've failed and clients are having trouble connecting, but the fastest path to recovery is dropping all the requests currently in progress and starting fresh.
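That's also roughly what the age-based dropping in A) buys you. A minimal sketch (the names and timeout are made up, check_auth is a stand-in for the real verification, and it assumes the frontend stamps each request with a wall-clock send time): the worker discards anything older than the frontend's own timeout, because that caller has already given up.

```erlang
%% Hypothetical sketch: the frontend stamps each request with
%% erlang:system_time(millisecond); the auth worker drops stale ones.
-define(MAX_AGE_MS, 5000).  %% should match (or undercut) the frontend's timeout

handle_call({auth, Blob, SentAtMs}, _From, State) ->
    AgeMs = erlang:system_time(millisecond) - SentAtMs,
    case AgeMs > ?MAX_AGE_MS of
        true ->
            %% The frontend has already timed out and dropped the client,
            %% so doing the work now is pure waste.
            {reply, {error, stale}, State};
        false ->
            {reply, check_auth(Blob, State), State}
    end.
```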
In some scenarios, even if the process knew it was backlogged and wanted to just accept messages one at a time and drop them, that's not fast enough to catch up to the backlog. The longer you're in unrecoverable backlog, the worse the backlog gets, because in addition to the regular load from clients waking up, you've also got all those clients that tried and failed coming back to retry. If the outage is long enough, you do get a bit of a drop-off, because clients that can't connect don't send messages that require waking up other clients, but that effect isn't so big when you've only got a large backlog on a few shards.
If the user client is well implemented, either it or the user notices that an action didn't take effect and tries again, similar to what you would do if a phone call was disconnected unexpectedly, or what most people would do if a clicked button didn't have the desired effect, i.e. click it repeatedly.
In many cases it's not a big problem if some traffic is wasted, compared to desperately trying to process exactly all of it in the correct order, which at times might degrade service for every user or bring the system down entirely.
Depending on what you want to do, there are ways to change where the blocking occurs, like https://blog.sequinstream.com/genserver-reply-dont-call-us-w...
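If I'm reading that link right, it's the defer-the-reply pattern: handle_call hands the slow work off and returns {noreply, ...}, then the reply goes back later via gen_server:reply/2, so the server process itself stops being the serialization point. A sketch (slow_lookup is a stand-in for the slow operation):

```erlang
%% Sketch of the deferred-reply pattern: the gen_server doesn't block on
%% the slow work, only the individual caller does.
handle_call({lookup, Key}, From, State) ->
    spawn_link(fun() ->
        Result = slow_lookup(Key),        %% stand-in for the slow operation
        gen_server:reply(From, Result)    %% unblocks the original gen_server:call
    end),
    %% No reply here, so the server loop is immediately free for the next request.
    {noreply, State}.
```

The caller still blocks in gen_server:call, but concurrent callers no longer queue up behind each other's slow work.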
Part of the problem with BEAM is that it doesn't have great ways of dealing with concurrency beyond gen_server (effectively a mutex) and ETS tables (https://www.erlang.org/doc/apps/stdlib/ets). So I think usually the solution is to use ETS where possible, which is kind of like a ConcurrentHashMap in other languages, or to shard or replicate the shared state so it can be accessed in parallel. For read-only data that doesn't change very often, the BEAM also has persistent_term (https://www.erlang.org/doc/apps/erts/persistent_term.html).
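For what that looks like in practice (a sketch with made-up table and key names): ETS with read_concurrency lets many processes read and write shared state directly, and persistent_term is for data you read constantly and almost never write, since updating or deleting an entry is expensive (it can trigger a global GC).

```erlang
%% Sketch: shared state that readers can hit in parallel, without funneling
%% everything through one gen_server. Table and key names are invented.
init_shared_state() ->
    %% ETS: mutable shared table, roughly a ConcurrentHashMap.
    _Tab = ets:new(sessions, [named_table, public, set,
                              {read_concurrency, true},
                              {write_concurrency, true}]),
    true = ets:insert(sessions, {user_123, #{node => 'fe1@host'}}),
    [{user_123, _Meta}] = ets:lookup(sessions, user_123),

    %% persistent_term: near-free reads from any process, but writes are
    %% costly, so only use it for data that rarely changes.
    ok = persistent_term:put({myapp, shard_map}, #{0 => 'db1@host', 1 => 'db2@host'}),
    _ShardMap = persistent_term:get({myapp, shard_map}),
    ok.
```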