.NET 2.0 ThreadPool Thread Stack Increase Causing Out of Memory Exception (Commit Limit Reached)

OS: Windows 7 Embedded

RAM: 1 GB

Paging File Size: 500 MB

Remaining Disk Space: ~1 GB

.NET: 2.0

I am working on a .NET 2.0 WinForms application written in C# that runs on Windows 7 Embedded, on a system with only 1 GB of RAM and fairly limited free disk space (~1 GB). We are migrating from Windows XP Embedded to Windows 7 Embedded, and while the program works well on Windows XP, it has failed several times on Windows 7 with an out of memory exception. Since the failures began we have added a 500 MB paging file (the default paging file size on Windows 7 Embedded appears to be 0 MB). However, we have narrowed the out of memory exception down to growth in the program's committed memory, so over a long enough run the system commit limit may still be reached even with the paging file. We cannot quickly migrate to more capable hardware, and must find a software solution that fixes the issue before the next release.

Using the Sysinternals VMMap tool, we can see that the number of thread stacks in the program's virtual memory slowly increases over time, until the program's committed memory pushes the system past its commit limit and the program fails. We have isolated the thread stack growth to the .NET 2.0 ThreadPool, which creates a net positive number of threads over time. We suspect this is due to an overuse of the System.Timers.Timer class in our code, since each instance runs its Elapsed event handler on the ThreadPool, though it is still not clear why the ThreadPool keeps so many threads alive when the timer callbacks presumably do not need them all. Some of these event handlers process information for longer than is ideal for a ThreadPool thread, with the worst offenders even calling Thread.Sleep().
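
For illustration, here is a minimal, hypothetical reduction of the pattern described above; the timer count, interval, and sleep duration are invented for the sketch, not taken from our code:

using System;
using System.Threading;

// Hypothetical reduction of the pattern: every timer fires its Elapsed
// handler on a ThreadPool worker thread, and a handler that blocks (here
// via Thread.Sleep) ties that worker up while the timer keeps firing,
// so the pool injects additional threads to keep servicing callbacks.
class TimerPressureExample
{
    static void Main()
    {
        for (int i = 0; i < 20; i++)
        {
            System.Timers.Timer timer = new System.Timers.Timer(1000); // fires every second
            timer.Elapsed += OnElapsed;
            timer.AutoReset = true;
            timer.Start();
        }
        Console.ReadLine(); // keep the process alive
    }

    static void OnElapsed(object sender, System.Timers.ElapsedEventArgs e)
    {
        // Simulated long-running work: the callback outlives the timer
        // interval, so callbacks overlap and each overlap needs a thread.
        Thread.Sleep(5000);
    }
}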

We have considered several possible solutions, including capping the number of ThreadPool worker threads, moving the longer-running timer callbacks onto dedicated threads, and migrating to a newer version of .NET. The last option assumes that the ThreadPool manager has been improved across .NET releases, which may help to alleviate the problem. Are there any other obvious (or non-obvious) solutions that we have missed?
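
To make the first two options concrete, here is a rough sketch of what we have in mind; the limits, interval, and the DoPeriodicWork method are placeholders, not values or code from our application:

using System;
using System.Threading;

class CandidateFixes
{
    // Option 1: cap the ThreadPool so it cannot keep growing. In .NET 2.0
    // timer callbacks also run on these workers, so a cap that is too low
    // can delay or starve other queued work.
    static void CapThreadPool()
    {
        ThreadPool.SetMaxThreads(10, 10); // illustrative limits
    }

    // Option 2: move a long-running timer callback onto a dedicated thread
    // that sleeps between iterations, so it never occupies a pool thread.
    static Thread StartLongRunningLoop()
    {
        Thread worker = new Thread(delegate()
        {
            while (true)
            {
                DoPeriodicWork();   // placeholder for the real callback body
                Thread.Sleep(5000); // former timer interval
            }
        });
        worker.IsBackground = true; // do not keep the process alive on exit
        worker.Start();
        return worker;
    }

    static void DoPeriodicWork() { }
}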

Edit: On further inspection, the ThreadPool threads that get generated and stick around have the following call stack:

ntdll!KiFastSystemCallRet 
ntdll!ZwWaitForSingleObject+c 
KERNELBASE!WaitForSingleObjectEx+98 
kernel32!WaitForSingleObjectExImplementation+75 
mscorwks!PEImage::LoadImage+1af 
mscorwks!CLREvent::WaitEx+117 
mscorwks!CLREvent::Wait+17 
mscorwks!ThreadpoolMgr::SafeWait+73 
mscorwks!ThreadpoolMgr::WorkerThreadStart+11c 
mscorwks!Thread::intermediateThreadProc+49 
kernel32!BaseThreadInitThunk+e 
ntdll!__RtlUserThreadStart+70 
ntdll!_RtlUserThreadStart+1b 