headless internet browser? [closed]

Posted 2018-12-31 21:25

I would like to do the following: log into a website, click a couple of specific links, then click a download link. I'd like to run this as either a scheduled task on Windows or a cron job on Linux. I'm not picky about the language I use, but I'd like this to run without putting a browser window up on the screen, if possible.

14 Answers
闭嘴吧你
#2 · 2018-12-31 21:49

You can use Watir with Ruby or WatiN with Mono.

无与为乐者.
#3 · 2018-12-31 21:50

Here is a list of the headless browsers that I know about:

  • HtmlUnit - Java. Custom browser engine. Limited JavaScript support/DOM emulated. Open source.
  • Ghost - Python only. WebKit-based. Full JavaScript support. Open source.
  • Twill - Python/command line. Custom browser engine. No JavaScript. Open source.
  • PhantomJS - Command line/all platforms. WebKit-based. Full JavaScript support. Open source.
  • Awesomium - C++/.NET/all platforms. Chromium-based. Full JavaScript support. Commercial/free.
  • SimpleBrowser - .NET 4/C#. Custom browser engine. No JavaScript support. Open source.
  • ZombieJS - Node.js. Custom browser engine. JavaScript support/emulated DOM. Open source. Based on jsdom.
  • EnvJS - JavaScript via Java/Rhino. Custom browser engine. JavaScript support/emulated DOM. Open source.
  • Watir-webdriver with headless gem - Ruby via WebDriver. Full JavaScript support via browsers (Firefox/Chrome/Safari/IE).
  • Spynner - Python only. Based on PyQt and WebKit.
  • jsdom - Node.js. Custom browser engine. Supports JS via emulated DOM. Open source.
  • TrifleJS - port of PhantomJS using MSIE (Trident) and V8. Open source.
  • ui4j - Pure Java 8 solution. A wrapper library around the JavaFx WebKit Engine incl. headless modes.
  • Chromium Embedded Framework - Full up-to-date embedded version of Chromium with off-screen rendering as needed. C/C++, with .NET wrappers (and other languages). As it is Chromium, it has support for everything. BSD licensed.
  • Selenium WebDriver - Full support for JavaScript via browsers (Firefox, IE, Chrome, Safari, Opera). Officially supported bindings are C#, Java, JavaScript, Haskell, Perl, Ruby, PHP, Python, Objective-C, and R. Unofficial bindings are available for Qt and Go. Open source.

Headless browsers that support JavaScript via an emulated DOM generally have issues with sites that use more advanced or obscure browser features, or that have functionality with visual dependencies (e.g. CSS positioning and so forth). So whilst the pure JavaScript support in these browsers is generally complete, the actual supported browser functionality should be considered partial only.
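
To give a flavour of the WebDriver route from the list above, here is a minimal Python sketch driving headless Chrome with Selenium. It assumes the selenium package and a matching chromedriver are installed; the URLs, form-field names, and link text are placeholders, not anything a real site is guaranteed to use:

from selenium import webdriver
from selenium.webdriver.common.by import By

options = webdriver.ChromeOptions()
options.add_argument("--headless")        # never shows a window
driver = webdriver.Chrome(options=options)
try:
    driver.get("https://example.com/login")                 # placeholder URL
    driver.find_element(By.NAME, "username").send_keys("user")
    driver.find_element(By.NAME, "password").send_keys("secret")
    driver.find_element(By.CSS_SELECTOR, "form [type=submit]").click()
    driver.find_element(By.LINK_TEXT, "Downloads").click()  # click through links
    # Grab the target's href; you can fetch it outside the browser if you prefer.
    url = driver.find_element(By.PARTIAL_LINK_TEXT, "Download").get_attribute("href")
finally:
    driver.quit()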

(Note: Original version of this post only mentioned HtmlUnit, hence the comments. If you know of other headless browser implementations and have edit rights, feel free to edit this post and add them.)

无与为乐者.
#4 · 2018-12-31 21:51

If the links are known in advance (i.e., you don't have to search the page for them), then you can probably use wget. I believe it can carry state (cookies) across multiple fetches, as sketched below.
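
A command-line sketch (the URLs and form-field names are placeholders; check the site's actual login form for the real ones):

# Log in once, keeping the session cookie.
wget --save-cookies cookies.txt --keep-session-cookies \
     --post-data 'username=user&password=secret' \
     -O /dev/null https://example.com/login

# Reuse the cookie for the authenticated download.
wget --load-cookies cookies.txt https://example.com/files/report.zip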

If you are a little more enterprising, then I would delve into the new goodies in Python 3.0. They redid the interface to their HTTP stack and, IMHO, the result is very well suited to this type of scripting.
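
For instance, a minimal sketch with the reworked urllib in Python 3 (the URL and form-field names are placeholders for whatever your site uses):

# Log in (the opener keeps the session cookie), then fetch the download.
import urllib.request, urllib.parse
from http.cookiejar import CookieJar

opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(CookieJar()))
creds = urllib.parse.urlencode({"username": "user", "password": "secret"})
opener.open("https://example.com/login", data=creds.encode("ascii"))

with opener.open("https://example.com/files/report.zip") as resp, \
        open("report.zip", "wb") as out:
    out.write(resp.read())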

旧人旧事旧时光
#5 · 2018-12-31 21:53

libcurl could be used to create something like this.
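
For instance, via pycurl, the Python binding for libcurl (a sketch; it assumes HTTP Basic auth, which may not match your site's login scheme, and the URL is a placeholder):

# Download one authenticated file with pycurl (pip install pycurl).
import pycurl

c = pycurl.Curl()
c.setopt(c.URL, "https://example.com/files/report.zip")
c.setopt(c.USERPWD, "user:secret")    # HTTP Basic auth
c.setopt(c.FOLLOWLOCATION, True)      # follow redirects
with open("report.zip", "wb") as out:
    c.setopt(c.WRITEDATA, out)
    c.perform()
c.close()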

像晚风撩人
#6 · 2018-12-31 21:59

Can you not just use a download manager?

There are better ones, but FlashGet has browser integration and supports authentication. You can log in, click a bunch of links, queue them up, and schedule the download.

You could write something that, say, acts as a proxy which catches specific links and queues them for later download, or a JavaScript bookmarklet that rewrites links to point at "http://localhost:1234/download_queuer?url=" + $link.href and have that queue the downloads - but you'd be reinventing the download-manager wheel, and with authentication it can get more complicated.

Or, if you want the "login, click links" bit to be automated as well, look into screen scraping: you load the page via an HTTP library, find the download links, and download them.

A slightly simplified example, using Python:

# Python 2 / BeautifulSoup 3 era code.
import urllib
import urlparse
from BeautifulSoup import BeautifulSoup

base = "http://%s:%s@example.com" % ("username", "password")
soup = BeautifulSoup(urllib.urlopen(base))

for link_tag in soup.findAll("a", href=True):        # skip anchors without an href
    link = urlparse.urljoin(base, link_tag["href"])  # resolve relative URLs
    filename = link.rstrip("/").split("/")[-1]       # everything after the last /
    if filename:
        urllib.urlretrieve(link, filename)

That would download every link on example.com after authenticating with the username "username" and password "password". You could, of course, find more specific links using BeautifulSoup's HTML selectors (for example, all links with the class "download", or URLs that start with http://cdn.example.com).
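
For instance, with BeautifulSoup 3's attribute filters (the class name and CDN host here are just the examples above, not anything the site is guaranteed to use):

# Narrow the selection: anchors with class "download", or hrefs on the CDN.
downloads = soup.findAll("a", {"class": "download"})
cdn_links = soup.findAll("a", href=lambda h: h and h.startswith("http://cdn.example.com"))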

You could do the same in pretty much any language.

泛滥B
#7 · 2018-12-31 22:02

I once did that using the Internet Explorer ActiveX control (WebBrowser, MSHTML). You can instantiate it without making it visible.

This can be done from any language that supports COM (Delphi, VB6, VB.NET, C#, C++, ...).

Of course this is a quick-and-dirty solution and might not be appropriate in your situation.
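
As an illustration, driving the control from Python through pywin32 (a sketch only; the busy-wait loop is simplistic, error handling is omitted, and the URL is a placeholder):

# Drive the IE ActiveX control invisibly via COM (pip install pywin32, Windows only).
import time
import win32com.client

ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = False                        # the window is never shown
ie.Navigate("https://example.com/login")  # placeholder URL
while ie.Busy or ie.ReadyState != 4:      # 4 == READYSTATE_COMPLETE
    time.sleep(0.5)
doc = ie.Document                         # MSHTML DOM: fill forms, click links
ie.Quit()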
