We are designing an application where users can set multiple tasks running simultaneously. We use ThreadPool.QueueUserWorkItem to start the tasks, and that part works well.
We do have a problem: these tasks can consume 500 MB+ of memory. We are using memory-mapped I/O to manage that memory. Still, when users start 10+ tasks simultaneously, the thread pool launches all of them, and at times we have run out of memory and exceptions have occurred. We can handle those errors just fine.
What I am wondering is whether there is a way to take the memory that will be consumed into account when processing the queue, i.e. to keep tasks queued until sufficient memory is available. Can I inform the thread pool how much memory we will be asking for (which we can estimate roughly)?
The ThreadPool knows nothing about what your tasks do; you need to enforce this yourself. Keep a global variable of type long holding the total number of bytes that all currently running jobs are likely to need at peak. When the thread pool schedules one of your tasks, check that variable first. If it is already too high, wait until a currently running task has exited, then check again.
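A minimal sketch of that idea, using the low-tech polling approach. The `MaxBytes` budget, the `MemoryGate` class, and the 2 GB figure are assumptions for illustration; you would plug in your own estimate per task:

```csharp
using System;
using System.Threading;

static class MemoryGate
{
    // Assumed budget: total estimated peak bytes allowed across all running jobs.
    const long MaxBytes = 2L * 1024 * 1024 * 1024; // 2 GB, hypothetical

    // Total bytes currently reserved by running jobs.
    static long _reservedBytes;

    public static void Run(long estimatedBytes, Action job)
    {
        // Poll until reserving this job's estimate fits under the budget.
        while (true)
        {
            long current = Interlocked.Read(ref _reservedBytes);
            if (current + estimatedBytes <= MaxBytes &&
                Interlocked.CompareExchange(
                    ref _reservedBytes, current + estimatedBytes, current) == current)
                break; // reservation succeeded atomically

            Thread.Sleep(100); // low-tech: retry every 100 ms
        }

        try { job(); }
        finally
        {
            // Release the reservation when the job exits, even on exception.
            Interlocked.Add(ref _reservedBytes, -estimatedBytes);
        }
    }
}
```

Queued work items then wrap the real task, e.g. `ThreadPool.QueueUserWorkItem(_ => MemoryGate.Run(500L * 1024 * 1024, MyTask));`. The compare-exchange loop avoids two tasks both seeing room and over-committing the budget.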
A low-tech way to do the waiting is polling with a 100 ms sleep interval. A high-tech version would use some kind of event-based waiting scheme.
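The event-based variant can be sketched with Monitor.Wait/Pulse, so waiters sleep until a job actually exits instead of polling. Again, `MaxBytes` and the class name are illustrative assumptions:

```csharp
using System;
using System.Threading;

static class MemoryGate
{
    const long MaxBytes = 2L * 1024 * 1024 * 1024; // assumed 2 GB budget
    static long _reservedBytes;                    // bytes reserved by running jobs
    static readonly object _gate = new object();

    public static void Run(long estimatedBytes, Action job)
    {
        lock (_gate)
        {
            // Block until enough budget is free; each exiting job pulses us.
            while (_reservedBytes + estimatedBytes > MaxBytes)
                Monitor.Wait(_gate);
            _reservedBytes += estimatedBytes;
        }

        try { job(); }
        finally
        {
            lock (_gate)
            {
                _reservedBytes -= estimatedBytes;
                // Wake all waiters so they can re-check the budget.
                Monitor.PulseAll(_gate);
            }
        }
    }
}
```

PulseAll rather than Pulse is deliberate: a single freed job may release enough memory for several small waiting tasks, so all waiters should re-evaluate the condition. One caveat in either variant: a single task whose estimate exceeds MaxBytes would wait forever, so you may want to admit such a task when nothing else is running.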