I have a C application that runs on Linux, Solaris, and AIX. I have used tools like TotalView's MemoryScape to track down memory leaks on Linux, and the application is 100% clean there. However, I have noticed a small leak on Solaris.
So I have been using "libumem" on Solaris to try to find the leak.
My application either calls a "user exit" (via a subprocess call) or does not.
If I run the application with no user exits (and therefore NO subprocess call), then libumem works 100%... and I still see no leaks:
LD_PRELOAD=libumem.so UMEM_DEBUG=audit ./myapplication config.ini
But when I turn on the user exit calls, so that the main application spawns subprocesses, the subprocess prints the following to STDOUT at runtime:
ld.so.1: userexit_proxy: fatal: libmapmalloc.so.1: No such file or directory
NOTE that if I do not use "libumem", the application runs 100%... (just with the tiny memory leak still present).
Now, my application is compiled as 64-bit, and I notice that /usr/lib/libmapmalloc.so.1 is 32-bit, but that should not make a difference...
Any idea how I can use libumem on an application that also calls subprocesses?
NOTE: I have also tried exporting the variables to the whole environment, still with no luck:
export LD_PRELOAD=libumem.so
export UMEM_DEBUG=audit
Also, please correct me if I am wrong, but if a subprocess completes, then any "leaked" memory in that subprocess would be freed automatically, right? So can I assume that no leaks on Solaris are coming from the subprocess call?
Any help in this regard will be greatly appreciated
Thanks for the help
Lynton
This behavior has already been observed when code using dlerror() wrongly assumes that it returns a non-null value even though dlopen() succeeded (see this mail on the indiana-discuss mailing list). I would start by tracing your application to see whether and how these functions are called.
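As a sketch of how to do that tracing (assuming Solaris truss is available and that dlopen()/dlerror() resolve from libc on your release; on older releases they may come from libdl instead), you could follow the child processes and trace just those calls:

truss -f -u libc:dlopen,dlerror ./myapplication config.ini

The -f flag makes truss follow children created by fork()/vfork(), so calls made inside userexit_proxy should show up in the same trace.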
/usr/lib/libmapmalloc.so.1 is indeed 32-bit, but if your application is 64-bit, it uses something like /usr/lib/amd64/libmapmalloc.so or similar.
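As a quick check (a sketch assuming a standard Solaris layout, where /usr/lib/64 is a link to sparcv9 or amd64 depending on the hardware), you can compare the ELF class of the two copies, and then preload libumem only into processes of the matching class so that a 32-bit subprocess no longer inherits the preload. The runtime linker honors the class-specific variables LD_PRELOAD_32 and LD_PRELOAD_64 for exactly this purpose:

file /usr/lib/libmapmalloc.so.1
file /usr/lib/64/libmapmalloc.so.1

# LD_PRELOAD_64 is only seen by 64-bit processes, so a 32-bit
# userexit_proxy (if that is what it is) is left alone
export LD_PRELOAD_64=libumem.so
export UMEM_DEBUG=audit
./myapplication config.ini

Since the subprocess frees everything when it exits anyway, not instrumenting it with libumem does not cost you any leak coverage for the main application.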
You are correct in stating that when a (sub)process ends, all of its allocated memory is freed.