atombender 13 hours ago

I've been optimizing inserts recently, and I've already reached the limits described here (i.e., already batching, inserting in parallel, etc.).

The final thing I was considering was implementing a dynamic batch/concurrency scheduler: something like hill climbing or AIMD, where the system keeps increasing the batch size and concurrency until it finds a sweet spot. It seems crazily unscientific for a system to hard-code the concurrency and batch size when that system may be under different load at any time, not to mention running on different hardware in the future.
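To make the AIMD idea concrete, here's a minimal sketch of such a controller. All names, thresholds, and constants are illustrative, not from any particular library:

```python
class AimdBatcher:
    """Additive-increase / multiplicative-decrease for batch size.

    Grows the batch size by a fixed step while throughput keeps
    improving, and cuts it sharply when throughput regresses
    (e.g. due to competing load). All constants are illustrative.
    """

    def __init__(self, initial=100, step=100, backoff=0.5,
                 min_size=10, max_size=10_000):
        self.batch_size = initial
        self.step = step          # additive increase
        self.backoff = backoff    # multiplicative decrease factor
        self.min_size = min_size
        self.max_size = max_size
        self._best_throughput = 0.0

    def record(self, rows_inserted, elapsed_seconds):
        """Feed back the result of the last batch; returns the next size."""
        throughput = rows_inserted / elapsed_seconds
        if throughput >= self._best_throughput * 0.95:
            # Holding steady or improving: probe a larger batch.
            self._best_throughput = max(self._best_throughput, throughput)
            self.batch_size = min(self.batch_size + self.step, self.max_size)
        else:
            # Throughput regressed: back off multiplicatively,
            # and reset the baseline so we can climb again.
            self.batch_size = max(int(self.batch_size * self.backoff),
                                  self.min_size)
            self._best_throughput = throughput
        return self.batch_size
```

The multiplicative decrease is what gives you the ramp-down under competing load; the same structure could drive a concurrency limit instead of (or in addition to) the batch size.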

Has anyone here found a nice algorithm for this? There seem to be a number of options.

Something else the article doesn't mention is that you can significantly increase insert performance by dropping indexes first and recreating them afterward. That's normally not possible, of course, unless you're creating a table/partition just for the one insert job. I wish Postgres had a batch mode where you could say "here are all my rows; insert them and only build the indexes when I commit."
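For the dedicated-partition case, the drop-then-rebuild pattern is just a statement sequence wrapped in one transaction (so other sessions never see the table without its indexes; note DROP INDEX still takes an exclusive lock on the table for the duration). A sketch that assembles it, with hypothetical table/index names:

```python
def bulk_insert_statements(table, indexes, copy_source="STDIN"):
    """Build the statement sequence for the drop/COPY/rebuild pattern.

    `indexes` maps index name -> the CREATE INDEX DDL used to rebuild it.
    Meant to run inside a single transaction; names are illustrative.
    """
    stmts = ["BEGIN"]
    stmts += [f"DROP INDEX {name}" for name in indexes]
    stmts.append(f"COPY {table} FROM {copy_source}")
    stmts += list(indexes.values())  # rebuild after the bulk load
    stmts.append("COMMIT")
    return stmts

# e.g. for a fresh partition with a single timestamp index:
plan = bulk_insert_statements(
    "events_2024_06",
    {"events_2024_06_ts_idx":
     "CREATE INDEX events_2024_06_ts_idx ON events_2024_06 (ts)"},
)
```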

abelanger 13 hours ago

While we didn't look too closely into this, our buffers used to scale capacity along a Fibonacci sequence whenever they hit capacity within the flush interval, up to a maximum capacity. I'm sure there's much better prior work on the ideal scaling law here, but this worked really well for us, and I suspect any exponential would do fine.
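The growth rule described above amounts to something like this (a sketch; the cap and naming are made up, not the actual implementation):

```python
def next_capacity(current, max_capacity=100_000):
    """Grow a buffer's capacity to the next Fibonacci number.

    Intended to be called when the buffer fills up within one flush
    interval; growth is capped at max_capacity. Since consecutive
    Fibonacci numbers differ by a factor approaching ~1.618, this is
    effectively exponential growth with a gentler base than doubling.
    """
    a, b = 1, 1
    while b <= current:
        a, b = b, a + b
    return min(b, max_capacity)
```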

When we rewrote our inserts system, we opted to just ensure that buffers flush immediately on the first write. That makes writes fast for anything on the order of < 1k writes/s with constant load over the second (10 buffers, 10 ms flush interval), which is a pretty good baseline for our system. This is simpler and has been working really well for us, so there hasn't been a need to reintroduce any sort of exponential scaling, but I'll be experimenting more with this.
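My reading of that scheme, as a sketch: writes are spread across a fixed set of buffers, a buffer's first write flushes immediately (keeping latency low at low rates), and later writes batch up until the flush interval elapses. The partitioning rule and all names here are assumptions:

```python
class WriteBuffers:
    """Sketch: N buffers, immediate flush on a buffer's first write,
    then interval-based flushing while writes keep arriving.
    Time is passed in explicitly so the logic is easy to test.
    """

    def __init__(self, n_buffers=10, flush_interval_ms=10):
        self.n_buffers = n_buffers
        self.flush_interval_ms = flush_interval_ms
        self.pending = [[] for _ in range(n_buffers)]
        self.last_flush_ms = [None] * n_buffers
        self.flushed = []  # batches handed off to the writer

    def write(self, key, row, now_ms):
        i = hash(key) % self.n_buffers
        self.pending[i].append(row)
        first_write = self.last_flush_ms[i] is None
        interval_up = (not first_write and
                       now_ms - self.last_flush_ms[i] >= self.flush_interval_ms)
        if first_write or interval_up:
            # Hand the batch off and reset the buffer's clock.
            self.flushed.append(self.pending[i])
            self.pending[i] = []
            self.last_flush_ms[i] = now_ms
```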

atombender 13 hours ago

So you measure the throughput and slowly increase the batch size until the throughput stabilizes?

That's better than nothing, of course, but I'd very much like to also be able to ramp down if throughput suddenly drops (e.g., due to competing activity).