jmyeet 8 days ago

The biggest mistake I see people make with Go channels is prematurely optimizing their code by making channels buffered. This is almost always a mistake. It seems logical. You don't want your code to block.

In reality, you've just made your code unpredictable, and there's a good chance you don't know what will happen when the buffer fills up and your code finally does block. You may have a deadlock and not realize it.
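A minimal sketch of that failure mode (the channel and loop here are made up for illustration): the buffer hides the missing consumer until it fills, and only then does the send block.

    package main

    import "fmt"

    func main() {
        events := make(chan string, 8) // buffer hides the fact that nothing reads it

        for i := 0; i < 20; i++ {
            // The first 8 sends succeed silently; the 9th blocks forever and
            // the runtime panics with "all goroutines are asleep - deadlock!".
            events <- fmt.Sprintf("event %d", i)
        }
        fmt.Println("never reached")
    }

With an unbuffered channel the very first send would block, so the bug surfaces immediately instead of only under load.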

So if the default position is unbuffered channels (which it should be), you then realize at some point that this is an inferior version of cooperative async/await.

Another general principle is that you want to avoid writing multithreaded application code. If you're locking mutexes or starting threads, you're probably going to have a bad time. An awful lot of code fits the model of serving an RPC or HTTP request, and if you can, you want that code to be single-threaded (async/await is fine).
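In Go that mostly falls out of net/http's model: each request already runs in its own goroutine, so the handler body can be plain sequential code. Rough sketch, with handle/process as made-up names:

    package main

    import (
        "io"
        "net/http"
    )

    // The server supplies the concurrency (one goroutine per request);
    // the handler itself stays single-threaded, with no locks or extra goroutines.
    func handle(w http.ResponseWriter, r *http.Request) {
        body, err := io.ReadAll(r.Body)
        if err != nil {
            http.Error(w, "bad request", http.StatusBadRequest)
            return
        }
        w.Write(process(body)) // ordinary blocking call, no mutexes
    }

    // process stands in for whatever work the request actually does.
    func process(b []byte) []byte { return b }

    func main() {
        http.HandleFunc("/", handle)
        http.ListenAndServe(":8080", nil)
    }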

franticgecko3 8 days ago

>The biggest mistake I see people make with Go channels is prematurely optimizing their code by making channels buffered. This is almost always a mistake. It seems logical. You don't want your code to block.

Thank you. I've fixed a lot of bugs in code that assumes a buffered channel is non-blocking. Channels always block eventually, because their capacity is fixed; my favorite preemptive fault-finding exercise is to go through a codebase, set every channel to unbuffered, and, lo and behold, find deadlocks everywhere.
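And if a send genuinely must not block, a buffer isn't the tool anyway; that has to be explicit with select/default. Sketch, names made up:

    package main

    import "fmt"

    // trySend delivers sample if the channel can take it right now,
    // otherwise it drops the value instead of blocking.
    func trySend(metrics chan int, sample int) bool {
        select {
        case metrics <- sample:
            return true // buffer space or a waiting receiver
        default:
            return false // would block: drop instead
        }
    }

    func main() {
        metrics := make(chan int, 2)
        for i := 0; i < 5; i++ {
            fmt.Println(i, "delivered:", trySend(metrics, i))
        }
        // With no reader, the first two samples fit in the buffer and the
        // rest are dropped - no buffer size changes that behaviour.
    }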

If that is the biggest mistake, then the second biggest mistake is attempting to increase performance of an application by increasing channel sizes.

A channel is a pipe connecting two workers: making the pipe wider doesn't make the workers process their work any faster; it makes them more tolerant of jitter, and that's it.

I cringe when I see a channel buffer with a size greater than ~100 - it's a telltale sign of a misguided optimization or a finger-waving session. I've seen channels sized at 100k for "performance" reasons, where the consumer pushes out to the network at, say, 1ms per item for processing and egress. Are you really expecting the consumer to fall 100 seconds behind, or did you just think bigger number = faster?
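A rough way to see it (illustrative numbers, not a real benchmark): if the consumer needs ~1ms per item, end-to-end time is set by the consumer no matter how the channel is sized; a big buffer only lets the producer run further ahead.

    package main

    import (
        "fmt"
        "time"
    )

    func run(bufSize, n int) time.Duration {
        ch := make(chan int, bufSize)
        done := make(chan struct{})

        go func() { // consumer: the bottleneck
            for range ch {
                time.Sleep(time.Millisecond) // stand-in for processing + network egress
            }
            close(done)
        }()

        start := time.Now()
        for i := 0; i < n; i++ { // producer: effectively instant
            ch <- i
        }
        close(ch)
        <-done
        return time.Since(start)
    }

    func main() {
        for _, buf := range []int{0, 100, 100000} {
            fmt.Printf("buffer %6d: %v\n", buf, run(buf, 200))
        }
        // All three runs take roughly n * 1ms; the 100k buffer just means the
        // producer finishes early while the backlog sits in memory.
    }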

dfawcus 8 days ago

Yup, most of my uses were unbuffered, or with small buffers (i.e. 3 slots or fewer), often just one slot.
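For what it's worth, the standard library has a well-known example of the one-slot pattern: signal.Notify documents that its channel should be buffered (size 1 is enough for one signal) because the runtime sends without blocking and would otherwise drop the signal.

    package main

    import (
        "fmt"
        "os"
        "os/signal"
    )

    func main() {
        sig := make(chan os.Signal, 1) // one slot so the runtime's non-blocking send isn't lost
        signal.Notify(sig, os.Interrupt)

        fmt.Println("press Ctrl-C to exit")
        <-sig
        fmt.Println("shutting down")
    }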

sapiogram 8 days ago

> So if the default position is unbuffered channels (which it should be), you then realize at some point that this is an inferior version of cooperative async/await.

I feel so validated by this comment.