willvarfar 3 days ago

I've spent a lot of time jumping in to optimise horrendously complicated programs and data pipelines. The big wins come from understanding the domain and spotting the inevitable misunderstandings and unnecessary steps that creep in when systems get so big they're split up and built/maintained by different people at different times.

pyfon 3 days ago

This! Extending the list of 4:

5. Remove unrequired behaviour.

6. Negotiate the required behaviour with stakeholders.

7. UX changes. E.g. make a synchronous flow a background job plus a notification (see the sketch after this list). Return the quick parts of the operation sooner (like a progressive JPEG).

8. Architecture changes. E.g. monolithification, microservification. Lambdas vs. VM vs. Fargate etc.
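
To make point 7 concrete: a minimal sketch of turning a synchronous flow into a background job, assuming a Python service with an in-process queue; the handler and job names are invented for illustration.

    import queue, threading, time

    jobs = queue.Queue()

    def worker():
        # Background thread drains the queue so the request path never blocks.
        while True:
            job = jobs.get()
            time.sleep(2)  # stand-in for the slow work
            print(f"done: {job} -- notify the user here (email, push, etc.)")
            jobs.task_done()

    threading.Thread(target=worker, daemon=True).start()

    def handle_request(payload):
        # The synchronous flow becomes: accept, enqueue, return immediately.
        jobs.put(payload)
        return {"status": "accepted"}  # e.g. HTTP 202 plus a job id

    print(handle_request({"report": 42}))
    jobs.join()

The request path now returns at once; the notification happens whenever the worker finishes. A real system would use a durable queue rather than an in-process one.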

And some more technical:

9. Caches? (sketch below)

10. Scalability: add more VMs.

11. Move compute local (on the client, on edge compute, in a nearby region).

11a. Same as 11 for data residency.

12. Data store optimisation, e.g. indices, query plans, foreign keys, consistent hashing (arguably a repeat of data structures; index sketch below).

13. Use a data centre for more bang/buck than cloud.

14. Compute type: use a GPU instead of a CPU, etc. I'll also bundle cache behaviour (L1 etc.) in here.

15. Look at sources of latency: proxies, sidecars, network hops (and their location), etc.

16. GC pauses, event loops, threading, other processes, etc. (event-loop sketch below)
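
A minimal illustration of point 9, assuming in-process memoisation is enough (note functools.lru_cache has no TTL, so stale entries live until evicted):

    import time
    from functools import lru_cache

    @lru_cache(maxsize=1024)
    def expensive_lookup(key):
        time.sleep(0.5)  # stand-in for a slow query or RPC
        return key.upper()

    t0 = time.perf_counter()
    expensive_lookup("user:42")  # miss: pays the full cost
    expensive_lookup("user:42")  # hit: served from memory
    print(f"{time.perf_counter() - t0:.2f}s total")  # ~0.5s, not ~1.0s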
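
For point 12, a self-contained before/after on what an index buys, using Python's sqlite3; the table and data are invented for the example:

    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
    db.executemany("INSERT INTO orders VALUES (?, ?)",
                   [(i, f"c{i % 100}") for i in range(10_000)])

    q = "SELECT count(*) FROM orders WHERE customer = 'c7'"
    print(db.execute("EXPLAIN QUERY PLAN " + q).fetchall())  # SCAN: full table

    db.execute("CREATE INDEX idx_customer ON orders(customer)")
    print(db.execute("EXPLAIN QUERY PLAN " + q).fetchall())  # SEARCH ... USING INDEX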
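
And for point 16, a sketch of keeping an event loop responsive by moving blocking work onto a thread pool (Python asyncio here; the same idea applies to any event-loop runtime):

    import asyncio, time

    def blocking_io():
        time.sleep(1)  # stand-in for sync I/O that would stall the loop

    async def main():
        loop = asyncio.get_running_loop()
        # Calling time.sleep(1) directly here would freeze every task on the loop;
        # run_in_executor pushes it onto a worker thread instead.
        await loop.run_in_executor(None, blocking_io)
        print("loop stayed responsive")

    asyncio.run(main())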