I am memory-mapping (with mmap) a large (4 GB) file in Linux for sequential read-only access.
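For context, here is a minimal sketch of what I am doing (the file name is a placeholder and error handling is simplified; assume a 64-bit build so the whole file fits in one mapping):

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("data.bin", O_RDONLY);  /* hypothetical 4 GB input file */
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

        /* Read-only private mapping of the whole file. */
        unsigned char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        /* Sequential scan: each major fault brings in the faulting page
           plus whatever the kernel's readahead window covers. */
        unsigned long sum = 0;
        for (off_t i = 0; i < st.st_size; i++)
            sum += p[i];
        printf("checksum: %lu\n", sum);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }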
When I read data from the mapping, I know that the OS faults a certain number of pages into the page cache. From my research, it appears that the amount read in ahead of time (the readahead) is fairly small, around 64 KB to 512 KB (see https://lwn.net/Articles/897786/).
For example, when I run "sudo fdisk --list /dev/sda1" for my 2 TB SSD, the reported optimal I/O size is 33553920 bytes:
    sudo fdisk --list /dev/sda1
    Disk /dev/sda1: 1.84 TiB, 2000365363200 bytes, 3906963600 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 33553920 bytes
So it seems I would be well advised to increase the readahead to the reported optimal size, especially since I am mapping a 4 GB file. But how do I do that? Could I use blockdev? And would a larger readahead actually improve performance?
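For what it's worth, blockdev does expose a readahead setting (in 512-byte sectors), and 33553920 / 512 = 65535 sectors, so I imagine something like the following would match the reported optimal size. The device names are from my machine, and I have not verified that this is the right knob for mmap-driven readahead:

    sudo blockdev --getra /dev/sda1         # current readahead, in 512-byte sectors
    sudo blockdev --setra 65535 /dev/sda1   # 65535 * 512 = 33553920 bytes
    cat /sys/block/sda/queue/read_ahead_kb  # same setting via sysfs, in kilobytes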
I plan to use posix_madvise (with POSIX_MADV_SEQUENTIAL, since my access is sequential). If I do that, will the readahead size automatically be adjusted for the range I specify in the posix_madvise call?
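Concretely, I mean something like this (a sketch; note that posix_madvise takes an address range plus an advice flag, not a readahead size):

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    /* Hint that the range [addr, addr + len) will be read sequentially.
       posix_madvise returns an error number directly; it does not set errno. */
    static void advise_sequential(void *addr, size_t len)
    {
        int rc = posix_madvise(addr, len, POSIX_MADV_SEQUENTIAL);
        if (rc != 0)
            fprintf(stderr, "posix_madvise: %s\n", strerror(rc));
    }

I would call this right after mmap, e.g. advise_sequential(p, st.st_size) with the mapping from the sketch above.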
Thanks.