As the title says, I wonder how fsutil on Windows can create a really large file so fast. Does it actually allocate real clusters for that file, or does it just write down the file's metadata? Consider the two commands below:
fsutil file createnew testFile <1Tb>
dd if=/dev/zero of=testFile bs=1024M count=1024
So I create a file of 1 TB. The problem is that with fsutil the file is created almost immediately, but dd took over an hour to complete. Therefore, I guess that fsutil only writes the file's metadata, and the real clusters are filled in whenever they are needed. Am I right?
From the Microsoft documentation of fsutil file createnew, you can say that the file must be all zeros ("[...] with content that consists of zeroes").

But even if that is true, I think you are right: probably, fsutil only reserves the clusters for the file at creation time without actually writing zeros into them, and reads of the never-written region simply return zeros.
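As a rough illustration that a filesystem can record a size without writing any data, here is a Linux sparse-file sketch. This is an analogy on my part, not necessarily the exact mechanism NTFS uses; `sparse_demo` is just a throwaway filename for the demo:

```shell
# Linux sparse-file sketch (an analogy for illustration; NTFS may use a
# different mechanism). truncate records the new size in metadata only,
# so it returns instantly even for a huge size.
truncate -s 1G sparse_demo

ls -l sparse_demo   # logical size: 1073741824 bytes
du -k sparse_demo   # actual disk usage: (close to) 0 KiB allocated
```

The logical size reported by `ls` and the allocated size reported by `du` differ by orders of magnitude, which is the same "instant huge file" effect you observed with fsutil.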
When you use dd as in your question, you are actually writing zeros, "byte by byte", into every byte of the new file, which is why it takes so long.
You can test this yourself. Create a file with fsutil (say, testFile_fsutil), and then inspect its contents in any hex editor, looking for non-zero bytes. More precisely, on Linux you can compare the file byte-for-byte against /dev/zero (1099511627776 bytes = 1 Tebibyte), count its non-zero bytes, or even use hashes: the hash of the file and the hash of the same number of bytes read from /dev/zero must be the same.
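The checks above can be sketched like this. This is only a sketch: it builds a small all-zero stand-in for testFile_fsutil so that it runs quickly; with the real 1 TiB file you would skip the dd line and set SIZE=1099511627776:

```shell
# Build a small all-zero stand-in for the fsutil-created file; with the
# real file, skip this dd line and set SIZE=1099511627776 (1 TiB).
SIZE=$((1024 * 1024))
dd if=/dev/zero of=testFile_fsutil bs=1024 count=1024 2>/dev/null

# 1) Byte-for-byte comparison against /dev/zero; cmp prints nothing when equal.
head -c "$SIZE" /dev/zero | cmp - testFile_fsutil && echo "identical to zeros"

# 2) Delete every NUL byte and count what is left; 0 means all zeros.
tr -d '\0' < testFile_fsutil | wc -c

# 3) Hash the file and the same number of zero bytes; the digests must match.
md5sum < testFile_fsutil
head -c "$SIZE" /dev/zero | md5sum
```

If any of the three checks disagrees (cmp reports a difference, the count is non-zero, or the digests differ), the file contains real non-zero data.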
Note: to prove your point much more quickly, you can use a much smaller file for the test.