To limit memory usage I read "How to prevent tensorflow from allocating the totality of a GPU memory?" and tried this code:
# Assume that you have 12GB of GPU memory and want to allocate ~4GB:
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.333)
sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))
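The 0.333 in the snippet above is just the desired allocation divided by the card's total memory. A tiny helper (my own, not part of TensorFlow) makes the intent explicit:

```python
# Helper (not part of TensorFlow): compute the value to pass to
# per_process_gpu_memory_fraction from a desired allocation in GB.
def memory_fraction(desired_gb, total_gb):
    return desired_gb / total_gb

# ~4GB out of a 12GB card:
print(round(memory_fraction(4, 12), 3))  # 0.333
```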
These commands did limit memory usage, but memory is not de-allocated after code completion. This issue describes the problem: https://github.com/tensorflow/tensorflow/issues/3701. A suggested fix is to update the driver: "After upgrading the GPU driver from 352.79 to 367.35 (the newest one), the problem disappeared." Unfortunately I'm not in a position to update to the latest version of the driver. Has this issue been resolved?
I also considered limiting the memory available to the Docker container. https://devblogs.nvidia.com/parallelforall/nvidia-docker-gpu-server-application-deployment-made-easy/ states "Containers can be constrained to a limited set of resources on a system (e.g. one CPU core and 1GB of memory)", but my kernel does not currently support this. Here I try to limit a new Docker instance to 1GB of memory:
nvidia-docker run -m 1024m -d -it -p 8889:8889 -v /users/user1234/jupyter:/notebooks --name tensorflow-gpu-1GB tensorflow/tensorflow:latest-gpu
But this does not appear possible, as I receive the warning: WARNING: Your kernel does not support swap limit capabilities, memory limited without swap.
Is there a command to free memory after a TensorFlow Python notebook completes?
Update
After killing/restarting the notebook the memory is de-allocated. But how can memory be freed after completion, from within the notebook?
IPython and Jupyter notebooks will not free memory unless you use del or %xdel on your objects: https://ipython.org/ipython-doc/3/interactive/magics.html