Does curlftpfs have a maximum size for mounted space, and how can I bypass it?


I mounted an FTP server into my local filesystem:

curlftpfs user:[email protected] /var/test/

Using pydf, I noticed that this volume reports a maximum size of 7629G (about 7.5 TiB):

Filesystem                                                            Size Used Avail Use%             Mounted on
curlftpfs#ftp://user:[email protected]                             7629G    0 7629G  0.0 [.........] /var/test

Then I tried to fill the space by writing an 8 GB file with dd, but this also failed at the given size:

dd if=/dev/zero of=upload_test bs=8000000000 count=1
dd: memory exhausted by input buffer of size 8000000000 bytes (7.5 GiB)

The FTP user has unlimited traffic and disk space on the remote server. So my question is: why is there a limit at 7.5 GB, and how can I bypass it?

2 Answers

BEST ANSWER

Looking at the source code of curlftpfs 0.9.2, which is the last released version, this 7629G seems to be the hardcoded default.

In other words, curlftpfs doesn't check the actual size of the remote filesystem; it reports a predefined static value instead. Moreover, such a check can't really be implemented, because the FTP protocol doesn't provide information about free space.

This means that the failure of your file transfer at 7.5 GB is not caused by the reported free space; the two figures differ by roughly a factor of 1000.

Details

The function ftpfs_statfs, which implements the statfs FUSE operation, sets the total number of filesystem blocks as follows:

buf->f_blocks = 999999999 * 2;

And the filesystem block size as:

buf->f_bsize = ftpfs.blksize;

Which is defined elsewhere as:

ftpfs.blksize = 4096;

So putting it all together: 999999999 * 2 * 4096 bytes / 2^30 ≈ 7629.39 GiB, which matches the number in your pydf output: 7629G.
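
For illustration, here is a minimal standalone sketch (not the curlftpfs source itself; the f_bfree/f_bavail lines are my assumption, chosen to match the Avail column in your pydf output). It plugs the quoted constants into a struct statvfs and prints the size a statfs-based tool such as pydf would derive from them:

/* sketch.c - reproduce the hardcoded size curlftpfs 0.9.2 reports */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
    struct statvfs buf = {0};

    /* Constants quoted above from ftpfs_statfs / ftpfs.blksize. */
    buf.f_bsize  = 4096;              /* ftpfs.blksize                         */
    buf.f_blocks = 999999999UL * 2;   /* hardcoded total block count           */
    buf.f_bfree  = buf.f_blocks;      /* assumption: everything reported free, */
    buf.f_bavail = buf.f_blocks;      /* matching the Avail column in pydf     */

    /* Total size as a statfs-based tool would compute it: blocks * block size. */
    double total_gib = (double)buf.f_blocks * (double)buf.f_bsize
                       / (1024.0 * 1024.0 * 1024.0);
    printf("reported size: %.2f GiB\n", total_gib);   /* prints ~7629.39 */

    return 0;
}

Compiling and running this prints roughly 7629.39 GiB, which is exactly the 7629G figure pydf shows for the mount.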

SECOND ANSWER

It's an old question, but for completeness' sake:

dd's bs ("block size") option makes it buffer the specified amount of data in memory before writing each chunk to the output. With a block size as large as your 8 GB, it's entirely possible your system simply did not have enough free memory (or even enough physical memory) to hold the whole buffer at once. Retrying with a smaller block size and a correspondingly higher count, for the same total output size, should work as expected:

dd if=/dev/zero of=upload_test bs=8000000 count=1000
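
This writes the same 8 GB in total (8,000,000 bytes × 1000), but dd only ever has to hold an 8 MB buffer in memory at a time.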