I only want the folder structure, but I couldn't figure out how to get it with wget. Instead I am using this:
wget -R pdf,css,gif,txt,png -np -r http://example.com
This should reject all files matching the -R list, but it seems to me wget still downloads each file, then deletes it.
Is there a better way to just get the folder structure?
HTTP request sent, awaiting response... 200 OK
Length: 136796 (134K) [application/x-download]
Saving to: “example.com/file.pdf”

100%[=====================================>] 136,796      853K/s   in 0.2s

2012-10-03 03:51:41 (853 KB/s) - “example.com/file.pdf” saved [136796/136796]

Removing example.com/file.pdf since it should be rejected.
If anyone is wondering: this is for a client. They could tell me the structure, but that's a hassle since their IT guy has to do it, so I wanted to just get it myself.
That appears to be how wget was designed to work. When performing recursive downloads, non-leaf files that match the reject list are still downloaded so they can be harvested for links, then deleted. From the in-code comments in recur.c (quoting the source; exact wording may vary between wget versions):
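    /* Either --delete-after was specified, or we loaded this
       (otherwise unneeded because of --spider or rejected by -R)
       HTML file just so we could harvest its hyperlinks -- in
       either case, delete the local file. */

This is the same code path that prints the "Removing ... since it should be rejected." line shown in your log.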
We had a run-in with this in a past project where we had to mirror an authenticated site, and wget kept hitting the logout pages even when it was meant to reject those URLs. We could not find any option to change this behaviour of wget.

The solution we ended up with was to download, hack, and build our own version of wget. There's probably a more elegant approach, but the quick fix we used was to add rules like the following to the end of the download_child_p() routine (modified to match your requirements, as sketched below):
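Here is a minimal sketch of the kind of check this adds, assuming the internals of wget's 1.x source tree (the struct url fields u->file and u->url, the has_html_suffix_p() helper, and the DEBUGP macro are wget's own; treat this as illustrative rather than a drop-in patch):

    /* Appended near the end of download_child_p () in recur.c, before
       the normal "accept" return.  The goal: follow only URLs that are
       directory listings or HTML pages, so the folder structure gets
       crawled without regular files ever being requested. */

    /* A directory URL ends in '/', so its filename component (u->file)
       is empty and it passes through to be crawled for links.  Anything
       with a non-empty filename that does not look like an HTML page is
       rejected here, before any download starts. */
    if (u->file[0] != '\0' && !has_html_suffix_p (u->file))
      {
        DEBUGP (("Rejecting %s: not a directory listing or HTML page.\n",
                 u->url));
        return false;
      }

With a rule like this built in, the command no longer needs the -R list at all (wget -np -r http://example.com): index pages are still fetched and parsed, so the directory tree is recreated locally, but regular files are rejected before the request is made rather than downloaded and then deleted.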