How to best develop web crawlers

Posted 2019-04-01 15:06

I often write crawlers to compile information, and whenever I come across a website whose info I need, I start a new crawler specific to that site, using shell scripts most of the time and sometimes PHP.

The way I do it is with a simple for loop to iterate over the page list, wget to download each page, and sed, tr, awk, or other utilities to clean the page and grab the specific info I need.
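Roughly, the loop looks like this (shown in Python only for illustration; the URL pattern and regex below are placeholders, my real version is a shell script around wget and sed):

```python
import re
import requests  # stands in for wget in this sketch

# Hypothetical paginated listing; the real URL pattern depends on the site.
BASE_URL = "https://example.com/list?page={}"
TITLE_RE = re.compile(r"<h2[^>]*>(.*?)</h2>", re.S)

for page in range(1, 11):
    html = requests.get(BASE_URL.format(page), timeout=30).text
    # Crude text extraction, in the same spirit as running sed/awk
    # over the downloaded page.
    for title in TITLE_RE.findall(html):
        print(title.strip())
```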

The whole process takes some time depending on the site, and even more to download all the pages. And I often run into an AJAX site that complicates everything.

I was wondering if there are better or faster ways to do this, or even applications or languages that help with this kind of work.

Tags: web-crawler
2 Answers
仙女界的扛把子
#2 · 2019-04-01 15:29

If you use Python, Scrapy is great and easy to use.
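A minimal spider sketch (it crawls Scrapy's public demo site, quotes.toscrape.com; swap in your own start URL and selectors):

```python
import scrapy

class QuotesSpider(scrapy.Spider):
    name = "quotes"
    # Scrapy's demo site; replace with the site you actually need.
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Extract the fields you care about with CSS selectors.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination links instead of building the URL list by hand.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page is not None:
            yield response.follow(next_page, callback=self.parse)
```

Save it as `quotes_spider.py` and run `scrapy runspider quotes_spider.py -o quotes.json`; Scrapy handles the downloading, retries, and concurrency for you.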

Bombasti
#3 · 2019-04-01 15:45

Using regular expressions for parsing content is a bad idea that has been covered in questions here countless times.

You should parse the document into a DOM tree; then you can pull out any hyperlinks, stylesheets, script files, images, or other external links that you want and traverse them accordingly.

Many scripting languages have packages for fetching web pages (e.g. cURL for PHP) and for parsing HTML (e.g. Beautiful Soup for Python). Go that route instead of the hacky solution of regular expression matching.
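For example, a small sketch with Requests and Beautiful Soup (the URL is just a placeholder) that parses the page into a tree and pulls out the external links:

```python
import requests
from bs4 import BeautifulSoup

# Placeholder URL; point this at the page you need.
url = "https://example.com/listing"
resp = requests.get(url, timeout=30)
resp.raise_for_status()

# Parse the HTML into a tree instead of grepping the raw text.
soup = BeautifulSoup(resp.text, "html.parser")

# Pull out hyperlinks and stylesheets from the parsed document.
links = [a["href"] for a in soup.find_all("a", href=True)]
stylesheets = [l["href"] for l in soup.find_all("link", rel="stylesheet") if l.get("href")]

print(links)
print(stylesheets)
```

From there you can queue the extracted links and repeat, which is the core of a crawler without any fragile regex matching.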
