Trying to move a MongoDB database with a little over 100 million documents from a server in AWS to a server in GCP. `mongodump` worked, but `mongorestore` keeps failing with:

`error running create command: 24: Too many open files`

How can this be done?

I don't want to transfer the data by writing a script on the AWS server that fetches each document and pushes it to an API endpoint on the GCP server, because that would take too long.
Edit (adding more details): I already tried setting `ulimit -n` to unlimited. That doesn't work, as GCP has a hardcoded limit that cannot be modified.
Looks like you are hitting the `ulimit` for your user. This is likely a function of some or all of the following:

- the `ulimit` configured on your host (probably 256 or 1024 depending on the OS)
- the concurrency of `mongorestore`, which can increase the number of file handles that are open at the same time

You can address the number of open files allowed for your user by invoking `ulimit -n <some number>` to increase the limit for your current shell. The number you choose cannot exceed the hard limit configured on your host. You can also change the `ulimit` permanently, more details here. This is the root cause fix, but it is possible that your ability to change the `ulimit` is constrained by GCP, so you might want to look at reducing the concurrency of your `mongorestore` process by tweaking its concurrency settings. If you have chosen values for these other than 1, you can reduce the concurrency (and hence the number of concurrently open file handles) by setting them back to 1.
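The setting names themselves didn't survive above; assuming they are `mongorestore`'s `--numParallelCollections` and `--numInsertionWorkersPerCollection` options (the two flags that control restore concurrency), a minimal-concurrency run might look like this (host and dump path are placeholders):

```shell
# Restore one collection at a time, with a single insertion worker per
# collection, to keep the number of concurrently open file handles low.
mongorestore \
  --host=localhost --port=27017 \
  --numParallelCollections=1 \
  --numInsertionWorkersPerCollection=1 \
  /path/to/dump
```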
Naturally, this will increase the run time of the restore process, but it might allow you to sneak under the currently configured `ulimit`. Although, just to reiterate: the root cause fix is to increase the `ulimit`.
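For reference, a quick sketch of inspecting and raising the limit in the shell you will run the restore from (this only affects the soft limit, which cannot exceed the hard limit without root):

```shell
# Show the current soft limit (the one mongorestore actually runs into)
ulimit -Sn
# Show the hard limit (the ceiling an unprivileged user can raise the soft limit to)
ulimit -Hn
# Raise the soft limit for this shell and its children, up to the hard limit
ulimit -n "$(ulimit -Hn)"
# Verify, then run mongorestore from this same shell
ulimit -Sn
```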