I have an app that reads data from a text file using:
CRD.reader = new StreamReader(fn, Encoding.UTF8, true, 1024);
CRD.reader.ReadLine();
BUT I run 16 instances of this app in parallel on my 24-core machine. When I do this, the total time taken is much greater than the time it takes a single instance running on its own (even though they are running in parallel). I assume this is because of contention for the disk?
I saw a suggestion to use a BufferedStream, but I don't understand how that differs from the code above. Surely, by specifying the buffer size as I have, I am already using a "buffered" stream?
For my code, I have tried various buffer sizes, but it does not appear to make much difference.
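For what it's worth, here is a sketch of the three buffering arrangements being discussed (the file name `fn` and the 1 MB buffer sizes are just placeholders). The `StreamReader` buffer you are setting is a character-decoding buffer that sits on top of the `FileStream`; a `BufferedStream` is a separate byte-level wrapper between the two:

```csharp
using System;
using System.IO;
using System.Text;

class BufferingDemo
{
    static void Main(string[] args)
    {
        string fn = args[0]; // path to the text file

        // Option A: what the question does — a 1024-char StreamReader buffer
        // on top of a FileStream opened with its default (small) buffer.
        using (var reader = new StreamReader(fn, Encoding.UTF8, true, 1024))
        {
            reader.ReadLine();
        }

        // Option B: the "use a BufferedStream" suggestion — an extra
        // byte-level buffer between the FileStream and the StreamReader,
        // so the disk is hit in larger sequential chunks.
        using (var fs = new FileStream(fn, FileMode.Open, FileAccess.Read))
        using (var bs = new BufferedStream(fs, 1 << 20))          // 1 MB
        using (var reader = new StreamReader(bs, Encoding.UTF8))
        {
            reader.ReadLine();
        }

        // Option C: often simpler — give the FileStream itself a large
        // buffer and hint sequential access, which makes the extra
        // BufferedStream layer redundant.
        using (var fs = new FileStream(fn, FileMode.Open, FileAccess.Read,
                                       FileShare.Read, 1 << 20,
                                       FileOptions.SequentialScan))
        using (var reader = new StreamReader(fs, Encoding.UTF8))
        {
            reader.ReadLine();
        }
    }
}
```

The buffer size passed to the `StreamReader` constructor does not change how the underlying `FileStream` reads from disk, which may be why varying it made little difference.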
EDIT 1
If anyone could explain how a bufferedstream differs from what I am doing - that would be very helpful
EDIT 2
If I set a large buffer with
CRD.reader = new StreamReader(fn, Encoding.UTF8, true, 65536);
CRD.reader.ReadLine();
Can I force the whole buffer to be filled on the first ReadLine? i.e. if my buffer is larger than the file size, the whole file could/should be read into memory. It seems to me that the operating system works by allowing that much buffer, but not necessarily using it.
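As I understand it, `ReadLine` fills the buffer lazily, so the buffer size is a ceiling rather than a guarantee. One way to guarantee the whole file comes off the disk in a single sequential pass is to read it into memory explicitly and parse lines from there. A minimal sketch, assuming the file fits comfortably in memory (`fn` is a placeholder path):

```csharp
using System;
using System.IO;
using System.Text;

class PreloadDemo
{
    static void Main(string[] args)
    {
        string fn = args[0]; // path to the text file

        // One sequential read of the entire file into memory.
        byte[] bytes = File.ReadAllBytes(fn);

        // Parse the lines from the in-memory copy — no further disk I/O
        // happens inside this loop, so 16 instances would each touch the
        // disk only once.
        using (var ms = new MemoryStream(bytes))
        using (var reader = new StreamReader(ms, Encoding.UTF8))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // process line
            }
        }
    }
}
```

If the files are large, `File.ReadAllLines(fn)` is an alternative that returns the lines directly, at the cost of allocating a string per line up front.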