There are ~10 million files on a disk (not all under the same directory).
I want to get [(file_name, file_size, file_atime)] of all files. But the command
find /data -type f -printf "%p\t%A@\t%s\n"
is hopelessly slow and drives I/O %util to ~100%.
Any advice?
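For reference, a minimal Python sketch of what that find command collects; the function name is illustrative. It makes the cost visible: one stat() call per file, so 10 million files means 10 million metadata reads scattered across the disk.

```python
import os
import stat

def collect(root):
    """Walk `root` and return [(path, size, atime)] for every regular file,
    mirroring: find root -type f -printf "%p\t%A@\t%s\n"."""
    results = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                # One stat syscall per file -- this is where the seeking happens
                st = os.stat(path, follow_symlinks=False)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if stat.S_ISREG(st.st_mode):  # match find's -type f
                results.append((path, st.st_size, st.st_atime))
    return results
```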
Not much you can do.
Check whether the filesystem is using directory indexes (dir_index). If you are desperate you can use debugfs and read the data raw, but I would not recommend it. You can also buy an SSD: the slowness is probably from seeking, and if you do this often an SSD will speed things up quite a bit.