Advice on crawling website content

Posted 2019-06-07 23:46

Question:

I am trying to crawl some website content using a combination of jsoup and Java, save the relevant details to my database, and repeat the same activity daily.

But here is the deal: when I open the website in a browser, I get the fully rendered HTML (with all the element tags present). When I test the JavaScript part (the one I'm supposed to use to extract the correct data), it works just fine.

But when I do a parse/get with jsoup (from a Java class), only the initial HTML is downloaded for parsing. In other words, some parts of the website are dynamic: they are rendered asynchronously after the initial GET, so I am unable to capture that data with jsoup.

Does anybody know a way around this? Am I using the right toolset? More experienced people, I would appreciate your advice.
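For context, here is a minimal sketch of the kind of fetch described above (the URL and the `ITEM_SELECTOR` CSS selector are placeholders, not from the original post). Because jsoup does not execute JavaScript, it only sees the server's initial HTML response:

```java
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class StaticFetch {
    // Hypothetical selector for the data the site renders with JavaScript.
    static final String ITEM_SELECTOR = "div.item";

    public static void main(String[] args) throws Exception {
        // Jsoup downloads only the initial HTML; it does not run
        // JavaScript, so elements injected after page load are missing.
        Document doc = Jsoup.connect("https://example.com").get();
        Elements items = doc.select(ITEM_SELECTOR);
        // This count can be 0 even though the browser shows the elements,
        // because they are added asynchronously after the first response.
        System.out.println("Items found: " + items.size());
    }
}
```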

Answer 1:

First, check whether the website you're crawling requires any of the following in order to show all of its content:

  • Authentication with a login/password
  • Some sort of session validation in the HTTP headers
  • Cookies
  • Some sort of time delay to load all of the content (sites heavy on JavaScript libraries, CSS, and asynchronous data may need this)
  • A specific browser User-Agent
  • A proxy password, if, for example, you're behind a corporate network security configuration

If anything on this list is needed, you can supply that data as parameters in your jsoup.connect(). Please refer to the official documentation:

http://jsoup.org/cookbook/input/load-document-from-url
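The checklist above can be sketched with jsoup's `Connection` API. All of the values below (URL, header names, cookie name, timeout) are placeholder assumptions; substitute whatever the target site actually demands:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class AuthenticatedFetch {
    // Extra HTTP headers a protected site might require; the
    // names and values here are placeholders for illustration.
    static Map<String, String> requestHeaders() {
        Map<String, String> headers = new LinkedHashMap<>();
        headers.put("Accept-Language", "en-US");
        headers.put("X-Requested-With", "XMLHttpRequest");
        return headers;
    }

    public static void main(String[] args) throws Exception {
        Document doc = Jsoup.connect("https://example.com/data")
                .userAgent("Mozilla/5.0")                // a browser-like User-Agent
                .headers(requestHeaders())               // session-validation headers
                .cookie("JSESSIONID", "your-session-id") // session cookie, if required
                .timeout(30_000)                         // give slow, JS-heavy pages time
                .get();
        System.out.println(doc.title());
    }
}
```

For a proxy, jsoup also supports `Connection.proxy(host, port)`; proxy credentials are handled at the JVM level (for example via `java.net.Authenticator`), not by jsoup itself. Note that none of this makes jsoup execute JavaScript: if the data only exists after client-side rendering, a headless browser or the site's underlying JSON/XHR endpoint is needed instead.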