I'd like to write a simple web spider, or just use wget, to download the PDF results from a Google Scholar search. That would actually be quite a spiffy way to get papers for research.
I have read the following pages on Stack Overflow:
Crawl website using wget and limit total number of crawled links
How do web spiders differ from Wget's spider?
Downloading all PDF files from a website
How to download all files (but not HTML) from a website using wget?
The last page is probably the closest to what I want. I tried using wget as suggested there against my Google Scholar search results page, but nothing was downloaded.
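For reference, the command I ran looked roughly like the following; the exact flags are my reading of the suggestion in that last question, and the search URL is just a placeholder for my actual results page:

    # Recurse one level from the results page, keep only PDFs,
    # put them in the current directory, and ignore robots.txt.
    wget --recursive --level=1 --no-directories \
         --accept=pdf \
         --user-agent="Mozilla/5.0" \
         -e robots=off \
         "https://scholar.google.com/scholar?q=my+search+terms"

I'm reconstructing this from memory, so treat it as approximate rather than the exact invocation I used.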
Given that my understanding of web spiders is minimal, what should I do to make this work? I realize that writing a spider is probably quite involved and may be a project I don't want to undertake. If it is possible using wget, that would be absolutely awesome.