We are designing an application where users can set multiple tasks running simultaneously. We use ThreadPool.QueueUserWorkItem to start the tasks. That part works well.
We do have a problem in that these tasks can consume 500MB+ of memory each. We are using memory-mapped I/O to manage the memory. Still, when users set 10+ tasks running simultaneously, the thread pool will start all of them, and there have been times when we ran out of memory and exceptions occurred. We can handle the errors just fine.
What I am wondering is whether there is a way to take the memory that will be consumed into account when processing the queue, i.e. to keep tasks queued until sufficient memory is available. Can I inform the thread pool how much memory we will be asking for (which we can estimate roughly)?
OK, if you can get a memory estimate for each task, you could probably do this by keeping a rough, critical-section-protected count of memory usage in the pool. Add to this count just before a task is submitted, and have each task call a 'memRelease' function just before it ends that subtracts from the count and checks whether anything queued can now be run (see below).
If some thread wants to submit a task and it is found (by comparing the task's needs with the current usage inside the critical section) that there is insufficient memory left in the 'budget' to run it, you could shove it into a concurrent queue/list to wait until there is. Whenever a task completes and calls 'memRelease', it adds back to the memory budget and iterates the queue/list (locking it first) to try to find something that can now be run with the increased memory available. If so, it submits that task to the pool.
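Something like this, as a rough sketch. All the names here (`MemoryBudgetScheduler`, `Submit`, `MemRelease`, the budget figure) are made up for illustration; the only real API used is `ThreadPool.QueueUserWorkItem`. For simplicity this version checks only the head of the pending queue (strict FIFO) rather than scanning the whole list for any task that fits, which also avoids starving large tasks:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

// Illustrative sketch of the scheme described above: a lock-protected
// memory 'budget' that gates submission to the thread pool.
class MemoryBudgetScheduler
{
    private readonly object _cs = new object();   // the critical section
    private readonly long _budget;                // total memory budget, bytes
    private long _inUse;                          // estimated memory currently claimed
    private readonly Queue<(long est, WaitCallback work)> _pending =
        new Queue<(long, WaitCallback)>();

    public MemoryBudgetScheduler(long budgetBytes) { _budget = budgetBytes; }

    // Submit a task with a rough memory estimate; it runs now if it
    // fits in the budget, otherwise it waits in the pending queue.
    public void Submit(long estimateBytes, WaitCallback work)
    {
        lock (_cs)
        {
            if (_inUse + estimateBytes > _budget)
            {
                _pending.Enqueue((estimateBytes, work)); // wait for memory
                return;
            }
            _inUse += estimateBytes;                     // claim the memory
        }
        QueueWrapped(estimateBytes, work);
    }

    private void QueueWrapped(long est, WaitCallback work)
    {
        ThreadPool.QueueUserWorkItem(state =>
        {
            try { work(state); }
            finally { MemRelease(est); }   // always return the budget, even on error
        });
    }

    // Called as each task ends: give back its estimate, then start
    // any queued tasks that now fit within the budget.
    private void MemRelease(long estimateBytes)
    {
        var runnable = new List<(long est, WaitCallback work)>();
        lock (_cs)
        {
            _inUse -= estimateBytes;
            while (_pending.Count > 0 &&
                   _inUse + _pending.Peek().est <= _budget)
            {
                var next = _pending.Dequeue();
                _inUse += next.est;        // claim before leaving the lock
                runnable.Add(next);
            }
        }
        foreach (var item in runnable)     // submit outside the lock
            QueueWrapped(item.est, item.work);
    }
}
```

Note that the budget is claimed and released strictly inside the lock, while the actual `QueueUserWorkItem` calls happen outside it, so a slow submission can't hold up other tasks finishing.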