I am running into a numpy.core._exceptions.MemoryError in my code, even though I have plenty of available memory on my machine, so this shouldn't be a problem.
(This is on a Raspberry Pi, armv7l, 4 GB RAM.)
$ free
              total        used        free      shared  buff/cache   available
Mem:        3748172       87636     3384520        8620      276016     3528836
Swap:       1048572           0     1048572
I found this post, which suggested that I should enable memory overcommitting in the kernel (vm.overcommit_memory), and so I did:
$ cat /proc/sys/vm/overcommit_memory
1
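(For reference, the setting can be changed at runtime with sysctl; it does not persist across reboots unless also added to /etc/sysctl.conf:)
$ sudo sysctl -w vm.overcommit_memory=1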
Now when I try to run this example:
import numpy as np
arrays = [np.empty((18, 602, 640), dtype=np.float32) for i in range(200)]
I get the same error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
Why is python (or numpy) behaving in that way and how can I get it to work?
EDIT: Answers to questions in the comments:
This is a 32-bit system (armv7l):
>>> sys.maxsize
2147483647
I printed the approximate amount allocated so far (according to the error message each array should be about 26.5 MiB) to see where the example fails:
def allocate_arr(i):
    print(i, i * 26.5)  # iteration count and approximate total MiB allocated so far
    return np.empty((18, 602, 640), dtype=np.float32)

arrays = [allocate_arr(i) for i in range(0, 200)]
The output shows that allocation fails at around 3 GB:
1 26.5
2 53.0
3 79.5
...
111 2941.5
112 2968.0
113 2994.5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "<stdin>", line 1, in <listcomp>
File "<stdin>", line 3, in allocate_arr
numpy.core._exceptions.MemoryError: Unable to allocate 26.5 MiB for an array with shape (18, 602, 640) and data type float32
Is 3 GB the limit? Is there a way to increase it? Also, isn't this exactly what overcommitting is supposed to allow?
By default, 32-bit Linux has a 3:1 user/kernel split. That is, of the 4 GB of address space that a 32-bit pointer can cover, 3 GB is reserved for user space and 1 GB for kernel space. Thus, any single process can use at most 3 GB of address space. The vm.overcommit_memory setting is unrelated to this; it controls whether processes may map more virtual memory than there is physical memory (plus swap) to back it.
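A rough sanity check of the numbers from the question (approximate, since the interpreter, numpy itself and shared libraries also occupy part of that 3 GB of address space):
# Size of one array from the question: shape (18, 602, 640), float32 = 4 bytes
bytes_per_array = 18 * 602 * 640 * 4
mib_per_array = bytes_per_array / 2**20    # ~26.5 MiB, matching the error message

# The loop in the question died after 113 successful allocations:
total_gib = 113 * bytes_per_array / 2**30  # ~2.9 GiB, just under the ~3 GiB
print(mib_per_array, total_gib)            # user-space limit of a 32-bit process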
There used to be so-called 4G/4G support in the Linux kernel (not sure whether those patches were ever mainlined), allowing the full 4 GB of address space to be used by the user-space process and another 4 GB by the kernel, at the cost of worse performance (a TLB flush at every syscall?). But AFAIU those patches have bitrotted, as everyone interested in using lots of memory moved to 64-bit systems long ago.