I need to know how to create a scraper (in Java) to gather data from HTML pages and output to a database...do not have a clue where to start so any information you can give me on this would be great. Also, you can't be too basic or simple here...thanks :)
Answer 1:
First you need to get familiar with an HTML DOM parser for Java, like JTidy. It will help you extract the stuff you want from an HTML file. Once you have the essential stuff, you can use JDBC to put it in the database.
It might be tempting to use regular expressions for this job. But don't: HTML is not a regular language, so regexes are not the way to go.
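The parse-then-extract step described above can be sketched with the JDK's built-in DOM parser; JTidy's `parseDOM` hands back the same `org.w3c.dom.Document` type, so the extraction code would look identical. The input markup here is a made-up example.

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class DomExtractSketch {
    /** Parse well-formed (X)HTML and collect the text of every <a> element. */
    static List<String> linkTexts(String xhtml) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new ByteArrayInputStream(xhtml.getBytes(StandardCharsets.UTF_8)));
            NodeList anchors = doc.getElementsByTagName("a");
            List<String> texts = new ArrayList<>();
            for (int i = 0; i < anchors.getLength(); i++) {
                texts.add(((Element) anchors.item(i)).getTextContent());
            }
            return texts;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String page = "<html><body><a href='/a'>First</a><a href='/b'>Second</a></body></html>";
        System.out.println(linkTexts(page)); // [First, Second]
    }
}
```

With the values in hand, the JDBC half of the answer is a `PreparedStatement` (`INSERT INTO ... VALUES (?)`) executed per extracted row; the table layout is whatever your application needs.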
Answer 2:
I am running a scraper using JSoup. I'm a noob, yet I found it to be very intuitive and easy to work with. It is also capable of parsing a wide range of sources: HTML, XML, RSS, etc.
I experimented with HtmlUnit with little to no success.
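A minimal jsoup sketch of the kind of scraping described above. The `Jsoup.parse(String)` overload is used so the example needs no network; the markup and selector are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.nodes.Element;

public class JsoupSketch {
    /** Return the href of every link in the given HTML, in document order. */
    static List<String> hrefs(String html) {
        Document doc = Jsoup.parse(html);          // tolerant of malformed HTML
        List<String> out = new ArrayList<>();
        for (Element a : doc.select("a[href]")) {  // CSS-style selector
            out.add(a.attr("href"));
        }
        return out;
    }

    public static void main(String[] args) {
        // Unquoted attributes and unclosed tags are handled gracefully.
        System.out.println(hrefs("<p>See <a href=/docs>docs</a> and <a href=/faq>faq"));
    }
}
```

For live pages, `Jsoup.connect(url).get()` fetches and parses in one step.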
Answer 3:
I successfully used the Lobo Browser API in a project that scraped HTML pages. The Lobo Browser project offers a browser, but you can also use the API behind it very easily. It will also execute JavaScript, and if that JavaScript manipulates the DOM, the changes will be reflected when you inspect the DOM. In short, the API lets you mimic a browser; you can also work with cookies and such.
Now, for getting the data out of the HTML, I would first transform the HTML into valid XHTML; you can use JTidy for this. Since XHTML is valid XML, you can use XPath to retrieve the data you want very easily. If you try to write code that parses the data out of the raw HTML, your code will become a mess quickly, so I'd use XPath.
Once you have the data, you can insert it into a DB with JDBC, or maybe use Hibernate if you want to avoid writing too much SQL.
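Once the page is valid XHTML, the XPath step above looks like this with the JDK's built-in `javax.xml.xpath` API; the document and expression are invented for illustration, and the tidying step (JTidy) would happen before this point.

```java
import java.io.StringReader;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.xml.sax.InputSource;

public class XPathSketch {
    /** Evaluate an XPath expression against a valid XHTML string. */
    static String evaluate(String xhtml, String expression) {
        try {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse(new InputSource(new StringReader(xhtml)));
            XPath xpath = XPathFactory.newInstance().newXPath();
            return xpath.evaluate(expression, doc);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        String page = "<html><body><div id='price'>19.99</div></body></html>";
        // Select the text of the div whose id attribute is 'price'.
        System.out.println(evaluate(page, "//div[@id='price']/text()")); // 19.99
    }
}
```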
Answer 4:
A HUGE percentage of websites are built on malformed HTML code.
It is essential that you use something like HtmlCleaner to clean up the source code that you want to parse.
Then you can successfully use XPath to extract nodes, and regexes to parse specific parts of the strings you extracted from the page.
At least this is the technique I used.
You can use the XHTML that is returned from HtmlCleaner as a sort of interface between your application and the remote page you're trying to parse. You should test against this, and if the remote page changes you just have to extract the new XHTML cleaned by HtmlCleaner, re-adapt the XPath queries to extract what you need, and re-test your application code against the new interface.
If you want to create a multithreaded scraper, be aware that HtmlCleaner is not thread-safe (see my post here).
This post can give you an idea of how to parse correctly formatted XHTML using XPath.
Good Luck! ;)
Note: at the time I implemented my scraper, HtmlCleaner did a better job of normalizing the pages I wanted to parse. In some cases JTidy failed at the same job, so I'd suggest you give HtmlCleaner a try.
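The "XPath for nodes, regex for the strings inside them" split described above might look like this. HtmlCleaner itself is left out, and the price format is a made-up example; the point is that the regex only ever sees a short, already-extracted string, never raw HTML.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class FieldRegexSketch {
    // Applied to text already pulled out of a node by XPath, not to raw HTML.
    private static final Pattern PRICE = Pattern.compile("\\$(\\d+\\.\\d{2})");

    /** Extract a dollar amount from node text, or null if none is present. */
    static String price(String nodeText) {
        Matcher m = PRICE.matcher(nodeText);
        return m.find() ? m.group(1) : null;
    }

    public static void main(String[] args) {
        System.out.println(price("In stock - $19.99 incl. tax")); // 19.99
    }
}
```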
Answer 5:
Using JTidy you can scrape data from the HTML. Then you can use JDBC.