I'm allocating a rather large, roughly 100GB, chunk of memory. The exact size is always known at compile time.
Should I be allocating statically?
static char data[DATA_SIZE];
Or using mmap?
data = mmap(NULL, DATA_SIZE, PROT_READ|PROT_WRITE, MAP_ANONYMOUS|MAP_PRIVATE|MAP_LOCKED|MAP_UNINITIALIZED, -1, 0);
With the former, the application (ignoring start-up time) seems to be running marginally faster.
Ignoring failed allocations, what are the pros and cons of each approach?
I would use mmap or malloc, simply because the failure case is easier to handle (and you could at least give a meaningful error message). With a static array, the execve(2) of your program would fail, and the shell trying to run it would give a not very useful message.

However, I would also perhaps test (maybe by parsing /proc/meminfo) that the underlying system has enough memory resources.

Finally, without knowing why you need so much data, it smells quite bad. Are you sure you cannot do otherwise? If you really need 100 Gbytes, you can only run on very large (and costly) machines.
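As a rough illustration (not a drop-in implementation), here is a minimal sketch of the mmap route with an explicit failure message and an optional MemTotal sanity check. DATA_SIZE, map_data and check_meminfo are names invented for this example, and the extra flags from the question are omitted for simplicity:

#define _DEFAULT_SOURCE
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

#define DATA_SIZE (100ULL * 1024 * 1024 * 1024)  /* ~100 Gbytes, as in the question */

/* Warn (but don't abort) if MemTotal in /proc/meminfo is smaller than DATA_SIZE. */
static void check_meminfo(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    char line[128];
    if (!f)
        return;  /* sanity check only; not fatal if /proc is unavailable */
    while (fgets(line, sizeof line, f)) {
        unsigned long long kb;
        if (sscanf(line, "MemTotal: %llu kB", &kb) == 1) {
            if (kb * 1024ULL < DATA_SIZE)
                fprintf(stderr, "warning: only %llu kB of RAM, expect heavy paging\n", kb);
            break;
        }
    }
    fclose(f);
}

/* mmap the region and give a meaningful message on failure. */
static char *map_data(void)
{
    void *p = mmap(NULL, DATA_SIZE, PROT_READ | PROT_WRITE,
                   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) {
        fprintf(stderr, "cannot mmap %llu bytes: %s\n",
                (unsigned long long)DATA_SIZE, strerror(errno));
        exit(EXIT_FAILURE);
    }
    return p;
}

int main(void)
{
    check_meminfo();
    char *data = map_data();
    data[0] = 1;  /* touch a byte so the mapping is actually usable */
    return 0;
}

The point is simply that a failed mmap lets you print a diagnostic and exit cleanly, whereas an oversized static array makes the program fail before main even runs.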
Don't expect the virtual memory subsystem to handle that nicely through paging: thrashing would be severe enough to make the machine unusable. You might also consider using madvise(2).
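For example, continuing the sketch above (data and DATA_SIZE are the invented names from that example), one might hint the expected access pattern to the kernel:

/* hypothetical continuation: tell the kernel we expect random access to the region */
if (madvise(data, DATA_SIZE, MADV_RANDOM) != 0)
    perror("madvise");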
Unless you have access to a specialized supercomputer, it looks like a design mistake (current desktops have at most 32 Gbytes).