Sounds like a fun project. However, from the readme:
“Efficient file listing: Optimized for speed, even in large directories”
What exactly is it doing differently to optimize for speed? Isn't it just using the regular fs lib?
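For reference, the baseline with the standard library is basically this (a minimal sketch, nothing lla-specific):

    use std::fs;
    use std::io;

    // Bare-bones listing via std::fs -- roughly the baseline any
    // "optimized" lister has to beat.
    fn list_dir(path: &str) -> io::Result<()> {
        for entry in fs::read_dir(path)? {
            println!("{}", entry?.file_name().to_string_lossy());
        }
        Ok(())
    }

    fn main() -> io::Result<()> {
        list_dir(".")
    }

If lla is calling the same std::fs APIs underneath, I'm not sure where the claimed speed advantage is supposed to come from.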
On my system it uses twice as much CPU as plain old ls in a directory with just 13k files. To recursively list a directory with 500k leaf files, lla needs > 10x as much CPU. So it is not just slower in absolute terms; the gap widens with file count, which suggests worse scaling rather than merely higher constant overhead.
Will definitely prioritize optimization in upcoming releases. Planning to benchmark against ls across a range of systems and file counts to get this properly sorted.
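Probably something along these lines as a starting point (a quick wall-clock sketch only, not the final methodology; the test directory and lla invocation below are placeholders, and for CPU time specifically I'd reach for /usr/bin/time or hyperfine):

    use std::process::Command;
    use std::time::Instant;

    // Rough harness: run each lister several times and average the
    // wall-clock time. Capturing stdout keeps terminal rendering
    // speed out of the measurement.
    fn bench(cmd: &str, args: &[&str], runs: u32) {
        let start = Instant::now();
        for _ in 0..runs {
            Command::new(cmd)
                .args(args)
                .output()
                .expect("command failed");
        }
        println!("{}: {:?} per run", cmd, start.elapsed() / runs);
    }

    fn main() {
        // Paths and flags are placeholders for illustration.
        bench("ls", &["-R", "/tmp/testdir"], 10);
        bench("lla", &["/tmp/testdir"], 10); // hypothetical invocation
    }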
Not trying to “gotcha” you, but I would imagine that 10x the CPU of ls is still very little, or am I wrong?
But it’s written in Rust so it’s super fast. Did you take that into account when running your benchmarks? /s