I am on Ubuntu 12.04 using ext4. I wrote a Python program that does small (mostly 512-byte) reads and writes with a somewhat random access pattern. I found that as the file gets larger, the same number of I/Os takes more and more time. The per-batch time grows linearly with the file size, so the cumulative time is O(n²), where n is the cumulative number of I/Os.
I wonder if there is an inherent reason why small I/Os get slower as the file size increases.
One more observation: when I mounted a ramdisk and did my file I/O to the ramdisk, I did NOT observe this performance degradation.
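For reference, here is a minimal sketch of the kind of benchmark I mean (the file name and batch sizes are arbitrary placeholders, not my actual program):

```python
import os
import random
import time

def random_io_benchmark(path, n_batches=5, batch=1000, block=512):
    """Do `batch` random 512-byte writes per batch, growing the file
    as we go, and return the elapsed time of each batch."""
    timings = []
    with open(path, "wb+") as f:
        # Seed the file with one block so randrange has a valid range.
        f.write(b"\0" * block)
        size = block
        for _ in range(n_batches):
            t0 = time.perf_counter()
            for _ in range(batch):
                offset = random.randrange(size)  # random access pattern
                f.seek(offset)
                f.write(os.urandom(block))
                size = max(size, offset + block)
            timings.append(time.perf_counter() - t0)
    return timings

# timings = random_io_benchmark("scratch.bin")
```

On my setup, successive entries in `timings` keep increasing as the file grows, even though each batch issues the same number of I/Os.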
Depending on how you're doing the I/O, it might be that you're pulling too much data into memory before writing it out.
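If that's the case, one way to test it is to buffer only a bounded number of records and flush periodically, rather than accumulating everything before a single large write. A rough sketch (function name and parameters are illustrative, not from your program):

```python
import os

def write_in_chunks(path, records, flush_every=1024):
    """Write small records, flushing to disk every `flush_every` records
    so unwritten data does not pile up in memory."""
    with open(path, "wb") as f:
        for i, rec in enumerate(records, 1):
            f.write(rec)
            if i % flush_every == 0:
                f.flush()               # hand buffered bytes to the OS
                os.fsync(f.fileno())    # force them out to the device
        f.flush()
        os.fsync(f.fileno())
```

If the slowdown persists even with periodic flushing, memory pressure probably isn't the cause and it's more likely a filesystem-level effect.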