What does the Scalene profiler's "peak" memory mean?

I am running a Docker container with a memory limit of 6 GB. Inside this container, a Python program is executed under the Scalene profiling tool. In the results, one line is reported as having a "peak" memory consumption of 13.4 GB. This should not be possible, because the container should be killed when it exceeds 6 GB. So what does Scalene's "peak" memory consumption mean, and how is it derived? Is it perhaps the cumulative memory consumption over the entire program execution?

There is 1 answer below.

I asked the same question in the Scalene "Q&A" and got the answer. Basically, the reported figure is the amount of memory the program requested (allocated), which is not necessarily backed by physical memory. If all of it had actually been used, the container's memory limit would have been hit and the container would have been killed.
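As a rough illustration of the difference between requested (virtual) memory and physically used (resident) memory, here is a small sketch. It assumes a typical Linux system, where a large zero-filled NumPy allocation is mapped lazily by the OS, so resident memory stays small until the pages are actually written. The array size and the RSS check via `resource.getrusage` are illustrative assumptions on my part, not something Scalene itself does.

```python
import resource
import numpy as np

def max_rss_mb() -> float:
    # On Linux, ru_maxrss is reported in kilobytes (on macOS it is bytes).
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

print(f"baseline peak RSS: {max_rss_mb():.0f} MB")

# Request ~2 GB. np.zeros typically gets zero-filled pages from the OS,
# which are mapped lazily: the memory is "requested" but not yet backed
# by physical RAM.
big = np.zeros(250_000_000, dtype=np.float64)  # ~2 GB requested
print(f"after np.zeros:    {max_rss_mb():.0f} MB")  # still close to baseline

# Writing to the array touches every page, so physical usage only now
# catches up with the requested size.
big[:] = 1.0
print(f"after writing:     {max_rss_mb():.0f} MB")  # roughly +2000 MB
```

A profiler that tracks allocation requests can therefore report a "peak" well above the container's memory limit, as long as the program never actually touches that many pages.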

Original post and answer: https://github.com/plasma-umass/scalene/discussions/618