TensorFlow always (pre-)allocates all free memory (VRAM) on my graphics card, which is ok since I want my simulations to run as fast as possible on my workstation.
However, I would like to log how much memory (in total) TensorFlow actually uses. Additionally, it would be really nice if I could also log how much memory individual tensors use.
This information is important to measure and compare the memory size that different ML/AI architectures need.
Any tips?
Update: you can use TensorFlow ops to query the allocator:
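A minimal sketch, assuming TF 1.x with the `tf.contrib.memory_stats` ops available (the specific op names are from that contrib module):

```python
import tensorflow as tf

with tf.Session() as sess:
    # ... build and run your model here ...

    # bytes currently held by the allocator on the default device
    print(sess.run(tf.contrib.memory_stats.BytesInUse()))

    # peak bytes held so far (maximum across all session.run calls)
    print(sess.run(tf.contrib.memory_stats.MaxBytesInUse()))
```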
Also, you can get detailed information about a `session.run` call, including all memory allocated during the `run` call, by looking at `RunMetadata`. Here's an end-to-end example -- take a column vector and a row vector, and add them to get a matrix of additions:
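A minimal reconstruction of that example, assuming TF 1.x graph mode: request `FULL_TRACE` run options so the `RunMetadata` proto is populated with per-node allocation info, then dump it to `run.txt`:

```python
import tensorflow as tf

run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
run_metadata = tf.RunMetadata()

with tf.Session() as sess:
    # 13-element column vector and row vector; broadcasting the add
    # produces a 13x13 matrix of pairwise sums
    a = tf.random_uniform((13, 1))
    b = tf.random_uniform((1, 13))
    c = a + b

    # FULL_TRACE makes TensorFlow record per-node timing and memory
    # information into run_metadata during this session.run call
    sess.run(c, options=run_options, run_metadata=run_metadata)

    # dump the collected metadata (a text-format proto) for inspection
    with open("run.txt", "w") as out:
        out.write(str(run_metadata))
```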
If you open `run.txt`, you'll see allocation messages for each node. Here you can see that `a` and `b` allocated 52 bytes each (13 float32 values * 4 bytes), and the result allocated 676 bytes (13*13*4).