I have been rewriting some process-intensive looping to use the TPL to increase speed. This is the first time I have tried threading, so I want to check whether what I am doing is the correct way to do it.
The results are good - processing the data from 1000 rows in a DataTable has reduced processing time from 34 minutes to 9 minutes when moving from a standard foreach loop to a Parallel.ForEach loop. For this test, I removed non-thread-safe operations, such as writing data to a log file and incrementing a counter.
I still need to write back into a log file and increment a counter, so I tried implementing a lock that encloses the streamwriter/increment code block.
FileStream filestream = new FileStream("path_to_file.txt", FileMode.Create);
StreamWriter streamwriter = new StreamWriter(filestream);
streamwriter.AutoFlush = true;

try
{
    object locker = new object();

    // Let's assume we have a DataTable containing 1000 rows of data.
    DataTable datatable_results;

    if (datatable_results.Rows.Count > 0)
    {
        int row_counter = 0;

        Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
        {
            // Process data_row as normal.
            // When ready to write to the log, do so.
            lock (locker)
            {
                row_counter++;
                streamwriter.WriteLine("Processing row: {0}", row_counter);
                // Write any data we want to log.
            }
        });
    }
}
catch (Exception e)
{
    // Catch the exception.
}

streamwriter.Close();
The above seems to work as expected, with minimal performance cost (still 9 minutes execution time). Granted, the actions contained in the lock are hardly significant in themselves - I assume that as the time taken to process the code within the lock increases, the longer each thread is blocked, and the more it affects overall processing time.
My question: is the above an efficient way of doing this, or is there a different way of achieving it that is either faster or safer?
Also, let's say our original DataTable actually contains 30000 rows. Is there anything to be gained by splitting this DataTable into chunks of 1000 rows each and then processing them in the Parallel.ForEach, instead of processing all 30000 rows in one go?
Writing to the file is expensive, and you're holding an exclusive lock while doing it - that's bad, because it introduces contention.
You could add each message to a buffer instead, then write to the file all at once. That removes the contention and gives you a way to scale.
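A minimal sketch of that buffering approach, assuming a DataTable with a single int column stands in for your real data (ProcessRow-style work is left as a comment, and the log file path is the one from your example):

```csharp
using System;
using System.Collections.Concurrent;
using System.Data;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

class BufferedLoggingSketch
{
    static void Main()
    {
        // Stand-in for your real 1000-row DataTable.
        var datatable_results = new DataTable();
        datatable_results.Columns.Add("Value", typeof(int));
        for (int i = 0; i < 1000; i++)
            datatable_results.Rows.Add(i);

        // Thread-safe buffer: each iteration enqueues instead of touching the disk.
        var log_buffer = new ConcurrentQueue<string>();

        Parallel.ForEach(datatable_results.AsEnumerable(), data_row =>
        {
            // Process data_row as normal, then queue the log message.
            log_buffer.Enqueue(string.Format("Processed value: {0}",
                data_row.Field<int>("Value")));
        });

        // Write once, single-threaded - no lock needed here.
        using (var streamwriter = new StreamWriter("path_to_file.txt"))
        {
            foreach (string line in log_buffer)
                streamwriter.WriteLine(line);
        }
    }
}
```

Note the log lines come out in whatever order the parallel iterations enqueued them, which is usually acceptable for a log; the buffer trades a little memory for removing per-row disk I/O and lock contention.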
If you need the row number, Parallel.ForEach provides you one through its indexed overload. The difference is that it is an index, not a counter. If that changes the expected behavior, you can still add the counter back and use Interlocked.Increment to increment it without a lock.
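A sketch of both options over a simple int array standing in for the rows - the Parallel.ForEach overload that supplies a long index, and a shared counter incremented atomically:

```csharp
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class CounterSketch
{
    static void Main()
    {
        int[] rows = Enumerable.Range(0, 1000).ToArray();

        // Option 1: the overload that supplies a 0-based long index per element.
        // Note it reflects the element's position in the source,
        // not the order in which iterations complete.
        Parallel.ForEach(rows, (row, loopState, index) =>
        {
            // index is unique for each element; use it as the "row number".
        });

        // Option 2: a shared counter incremented atomically - no lock required.
        int row_counter = 0;
        Parallel.ForEach(rows, row =>
        {
            int current = Interlocked.Increment(ref row_counter);
            // current is this iteration's counter value.
        });

        Console.WriteLine(row_counter); // 1000
    }
}
```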
You also have streamwriter.AutoFlush = true, which will hurt performance; you can set it to false and flush once you're done writing all the data. If possible, wrap the StreamWriter in a using statement, so that you don't even need to flush the stream (you get it for free). Alternatively, you could look at logging frameworks, which do this job pretty well. Examples: NLog, log4net, etc.
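A sketch of the using-statement shape (AutoFlush stays at its default of false; Dispose at the end of the block flushes and closes the stream for you):

```csharp
using System.IO;

class UsingSketch
{
    static void Main()
    {
        // Dispose - called automatically when the using block exits,
        // even on an exception - flushes buffered data and closes the file.
        using (var streamwriter = new StreamWriter("path_to_file.txt"))
        {
            for (int i = 1; i <= 3; i++)
                streamwriter.WriteLine("Processing row: {0}", i);
        } // Flush and Close happen here for free.
    }
}
```

This also fixes a subtlety in the original code: streamwriter.Close() sits after the catch block, so an unhandled exception path could skip it, whereas using guarantees cleanup.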