Web Scraping with Scala [closed]

Posted 2020-05-13 15:10

Question:

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 7 years ago.

Just wondering if anyone knows of a web-scraping library that takes advantage of Scala's succinct syntax. So far, I've found Chafe, but this seems poorly documented and maintained. I'm wondering if anyone out there has done scraping with Scala and has advice. (I'm trying to integrate into an existing Scala framework rather than use a scraper written in, say, Python.)

Answer 1:

First, there is a plethora of HTML-scraping libraries on the JVM; all you need to do is enrich one of them using the "pimp my library" pattern (an implicit class that adds Scala-friendly methods; a minimal sketch follows the list below).

The four I have used are:

  • HtmlUnit - Will emulate a browser and even run JavaScript
  • Jericho - Preserves formatting and ideal if you want to edit the scraped HTML
  • NekoHtml
  • JSoup -- I had mixed results using it from Scala, but it might work for you
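
For illustration, here is a minimal sketch of the pattern using jsoup (this assumes org.jsoup:jsoup is on the classpath and Scala 2.13+; the ScrapingSyntax object, the RichDocument class, and the $ method are names made up for this example, not part of any library):

    // Enrich jsoup's Document with a Scala-friendly CSS-selector method.
    import org.jsoup.Jsoup
    import org.jsoup.nodes.{Document, Element}
    import scala.jdk.CollectionConverters._

    object ScrapingSyntax {
      implicit class RichDocument(private val doc: Document) extends AnyVal {
        // Return a Scala Seq instead of jsoup's java.util.List-based Elements.
        def $(cssQuery: String): Seq[Element] = doc.select(cssQuery).asScala.toSeq
      }
    }

    object JsoupExample extends App {
      import ScrapingSyntax._
      val doc = Jsoup.parse("""<html><body><a href="/a">A</a><a href="/b">B</a></body></html>""")
      val hrefs = doc.$("a").map(_.attr("href"))   // Seq(/a, /b)
      println(hrefs)
    }

The same trick works for any of the libraries above: keep the Java library for the heavy lifting and add a thin implicit layer for the Scala ergonomics.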

I have used Selenium, but never for scraping. Scala has a wrapper around Selenium.

I would recommend pimping an existing Java library over some half-baked Scala library.



Answer 2:

I don't have a Scala-specific recommendation, but for the JVM in general I've had good success with:

  • JSoup - You can use CSS selectors to "scrape" the document. Really nice to work with.
  • TagSoup - Use it to convert your input HTML to well-formed XML, then use XML processors to "scrape".

The TagSoup route actually works quite well with Scala, since Scala's built-in XML "DSL" is pretty concise (if you can forgive its performance issues and occasional API weirdness). Also, TagSoup will handle nearly any garbage document you give it, and it has niceties like built-in understanding of many HTML entities that other SAX parsers will choke on as undeclared.
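
As a rough sketch of that route (assuming the org.ccil.cowan.tagsoup:tagsoup and scala-xml artifacts are on the classpath; exact package names and signatures may differ between scala-xml versions):

    // Feed TagSoup's SAX parser into Scala's XML factory adapter to get
    // scala.xml Nodes even from sloppy, real-world HTML.
    import java.io.StringReader
    import org.ccil.cowan.tagsoup.jaxp.SAXFactoryImpl
    import org.xml.sax.InputSource
    import scala.xml.Node
    import scala.xml.parsing.NoBindingFactoryAdapter

    object TagSoupScraper {
      private val parser = new SAXFactoryImpl().newSAXParser()

      def parseHtml(html: String): Node =
        new NoBindingFactoryAdapter().loadXML(new InputSource(new StringReader(html)), parser)
    }

    object TagSoupExample extends App {
      val doc = TagSoupScraper.parseHtml("""<html><body><a href="/x">X</a><p>unclosed paragraph""")
      // "\\" searches all descendants; "@href" pulls the attribute value.
      val hrefs = (doc \\ "a").map(a => (a \ "@href").text)
      println(hrefs)   // List(/x)
    }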

tl;dr - JSoup + CSS selectors if possible, otherwise TagSoup + Scala XML. If speed isn't critical, you can even run TagSoup first to clean up the markup and then feed the result to JSoup.



Answer 3:

I'd recommend Goose: https://github.com/jiminoc/goose

It's not as general-purpose as you might need, but if you are scraping article content from popular sites, it may work out of the box. It also provides a framework to build on if you want to extend its code to cover other sites.