Resuming interrupted wget of entire website without starting at the beginning


I have been downloading a website using this wget command:

wget \
 --recursive \
 --no-clobber \
 --page-requisites \
 --html-extension \
 --convert-links \
 --restrict-file-names=windows \
 --domains website.org \
 --wait=10 \
 --limit-rate=30K \
    www.domain.org/directory/something.html

I wanted to use the --wait and --limit-rate options to avoid overloading the website. The download was going fine, but about 24 hours in it was interrupted. I thought I could resume simply by rerunning the command with the --no-clobber option, and although wget does not re-download the files it has already saved, it still waits 10 seconds after checking each one.

Is there any way to make wget wait only when it actually has to download a file, so that the checking pass goes quickly until I catch up to the point where I left off? What would be the best way to do this?
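
One workaround I have considered (assuming --no-clobber keeps skipping the files that are already on disk) is to rerun the same command with the wait temporarily lowered until the crawl catches up, then switch back to --wait=10 for the new files. I am not sure whether wget still makes a request for each skipped file, but at least the delay per file would be much shorter:

# same command, but --wait lowered from 10 to 1 while re-walking
# the part of the site that is already downloaded
wget \
 --recursive \
 --no-clobber \
 --page-requisites \
 --html-extension \
 --convert-links \
 --restrict-file-names=windows \
 --domains website.org \
 --wait=1 \
 --limit-rate=30K \
    www.domain.org/directory/something.html

This feels like a hack, though, since I would have to watch the output and restart with the original options once it reaches new pages, so I would prefer a cleaner solution if one exists.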

Thanks.
