I create a list of a million int objects, then replace each with its negated value. tracemalloc reports 28 MB of extra memory (28 bytes per new int object). Why? Does Python not reuse the memory of the garbage-collected int objects for the new ones? Or am I misinterpreting the tracemalloc results? Why does it report those numbers, and what do they really mean here?
import tracemalloc
xs = list(range(10**6))
tracemalloc.start()
for i, x in enumerate(xs):
    xs[i] = -x
print(tracemalloc.get_traced_memory())
Output:
(27999860, 27999972)
If I replace xs[i] = -x with x = -x (so the new object rather than the original object gets garbage-collected), the output is a mere (56, 196). How does it make any difference which of the two objects I keep and which I lose?
And if I run the loop twice, it still only reports (27992860, 27999972). Why not 56 MB? How is the second run any different from the first here?
Short Answer
tracemalloc was started too late to track the initial block of memory, so it didn't realize that memory was being reused. In the example you gave, you free 27999860 bytes and allocate 27999860 bytes, but tracemalloc can't "see" the free. Consider the following, slightly modified example:
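A minimal sketch of such a modified example, assuming the key change is simply starting tracemalloc before xs is allocated (the variable names here are mine):

```python
import tracemalloc

# Start tracing BEFORE the list is built, so the initial int objects
# (and the list's pointer array) are recorded in the trace table.
tracemalloc.start()
xs = list(range(10**6))
after_alloc, _ = tracemalloc.get_traced_memory()

for i, x in enumerate(xs):
    xs[i] = -x  # frees the old int; the new one reuses its memory

after_loop, _ = tracemalloc.get_traced_memory()
print(after_alloc, after_loop)
```

With the initial allocation tracked, the frees inside the loop are visible too, so the net figure stays near 36 MB instead of jumping by another 28 MB.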
On my machine (Python 3.10, but the same allocator), tracemalloc reports 35993436 bytes allocated right after xs is built, and a net total of 36000576 bytes after the loop runs. This shows that memory usage isn't actually increasing by 28 MB.
Why does it behave this way?
tracemalloc works by overriding the standard internal allocation functions with tracemalloc_alloc, plus analogous free and realloc functions. Taking a peek at the CPython source, we see that the new allocator does two things:
1.) Call out to the "old" allocator to get memory
2.) Add a trace to a special table, so we can track this memory
If we look at the associated free functions, it's very similar:
1.) free the memory
2.) Remove the trace from the table
In your example, you allocated xs before you called tracemalloc.start(), so the trace records for that allocation were never put in the memory-tracking table. Therefore, when the initial int objects are freed, there are no traces to remove, and you get the weird allocation numbers you saw.
Why is the total memory usage 36000000 bytes and not 28000000?
Lists in Python are a little unusual. They are actually arrays of pointers to individually allocated objects: internally, a PyListObject holds a pointer (ob_item) to a heap-allocated array of PyObject pointers, along with its length and allocated capacity.
PyObject_HEAD is a macro that expands to the header information every Python object has. It is just 16 bytes, containing the reference count and a pointer to the object's type.
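You can see that 16-byte header directly: a bare object() carries nothing beyond PyObject_HEAD, so on a 64-bit CPython build:

```python
import sys

# A bare object() is just PyObject_HEAD: an 8-byte reference count
# plus an 8-byte type pointer on a 64-bit build.
print(sys.getsizeof(object()))  # 16
```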
Importantly, a list of integers is actually a list of pointers to PyObjects that happen to be ints. On the line
xs = list(range(10**6))
, we expect to allocate:
1.) The backing structure for the list itself (1000000 8-byte pointers plus a small header): 8000024 bytes
2.) 1000000 individually allocated 28-byte integer objects (PyLongObject in the underlying implementation): 28000000 bytes
For a grand total of 36000024 bytes. That number looks pretty familiar!
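You can sanity-check the per-object numbers with sys.getsizeof (the exact list overhead varies slightly by Python version):

```python
import sys

n = 10**6
xs = list(range(n))

print(sys.getsizeof(xs))           # list object + its n pointers: a bit over 8 MB
print(sys.getsizeof(xs[500]))      # one machine-sized int object: 28 bytes
print(n * sys.getsizeof(xs[500]))  # 28000000 bytes across all the int objects
```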
When you overwrite a value in the list, you're just freeing the old value and updating the pointer in PyListObject->ob_item. This means the array structure is allocated once, takes up 8000024 bytes, and lives to the end of the program. Additionally, 1000000 integer objects are each allocated, and references to them are stored in the array; they take up the 28000000 bytes. One by one, each old object is deallocated, and its memory is reused to allocate the new object in the loop. This is why running the loop multiple times doesn't increase the reported memory.
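You can often observe that reuse directly in CPython, where id() happens to be the object's memory address (this is implementation-dependent, so treat it as illustrative rather than guaranteed):

```python
a = int("999983")   # built at runtime so it isn't a cached or constant-folded int
addr = id(a)        # in CPython, id() is the object's memory address
del a               # the 28-byte block goes back to the allocator's free list
b = int("999979")   # a new int of the same size class may land in that block
print(id(b) == addr)  # often True on CPython, but not guaranteed
```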