Question:
Oftentimes I want to automate HTTP queries. I currently use Java (with Commons HttpClient), but would prefer a scripting-based approach: something really quick and simple, where I can set a header and go to a page without setting up the entire OO lifecycle, setting each header, and calling up an HTML parser... I am looking for a solution in ANY language, preferably a scripting one.
Answer 1:
Have a look at Selenium. It generates code for C#, Java, Perl, PHP, Python, and Ruby if you need to customize the script.
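For a sense of what driving a browser looks like, here is a minimal hand-written sketch in Python using the Selenium WebDriver API (not the generated code the IDE produces); the URL is a placeholder, and Firefox plus its driver are assumed to be installed:
# Minimal Selenium WebDriver sketch (Python); assumes the selenium package and Firefox are available.
from selenium import webdriver
driver = webdriver.Firefox()                     # launches a real browser
try:
    driver.get("http://www.example.com/login")   # placeholder URL
    print(driver.title)                          # inspect the loaded page
finally:
    driver.quit()                                # always close the browser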
Answer 2:
Watir sounds close to what you want, although it (like Selenium, linked to in another answer) actually opens up a browser to do its work. You can see some examples here. Another browser-based record-and-playback system is Sahi.
If your application uses WSGI, then Paste is a nice option.
Mechanize, linked to in another answer, is a "browser in a library", and there are clones in Perl, Ruby, and Python. The Perl one is the original and seems to be the way to go if you don't want a real browser. The drawback of this approach is that none of the front-end code (which might rely on JavaScript) will be exercised.
Answer 3:
Mechanize for Python seems easy to use: http://wwwsearch.sourceforge.net/mechanize/
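As a rough sketch of the kind of thing the question asks for (set a header, fetch a page, fill in a form) with Python's mechanize; the URL and form field names below are hypothetical:
import mechanize
br = mechanize.Browser()
br.addheaders = [("User-Agent", "my-script/0.1")]    # set a custom header
response = br.open("http://www.example.com/login")   # hypothetical URL
print(response.read()[:200])                         # peek at the HTML
br.select_form(nr=0)               # pick the first form on the page
br["username"] = "me"              # hypothetical field names
br["password"] = "secret"
result = br.submit()               # cookies are handled automatically
print(result.geturl())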
Answer 4:
My turn: wget, or Perl with LWP. You'll find examples on the linked page.
Answer 5:
If you have simple needs (fetch a page and then parse it), it is hard to beat LWP::Simple and HTML::TreeBuilder.
use strict;
use warnings;
use LWP::Simple;
use HTML::TreeBuilder;
my $url = 'http://www.example.com';
my $content = get($url) or die "Couldn't get $url";
my $t = HTML::TreeBuilder->new_from_content($content);
$t->eof;
$t->elementify;
# Get first match:
my $thing = $t->look_down( _tag => 'p', id => qr/match_this_regex/ );
print $thing ? $thing->as_text : "No match found\n";
# Get all matches:
my @things = $t->look_down( _tag => 'p', id => qr/match_this_regex/ );
print $_ ? $_->as_text : "No match found" for @things;
Answer 6:
I'm testing REST APIs at the moment and found the RESTClient very nice. It's a GUI program, but you can still save and restore queries as XML files (or have them generated), embed them, write test scripts, and so on. And it's Java-based (not an advantage in itself, but you did mention Java).
It loses points on recording sessions: the RESTClient is good for stateless "one-shots".
If it doesn't suit your needs, I'd go for the already-mentioned Mechanize (or WWW::Mechanize, as it is called on CPAN).
Answer 7:
Depending on exactly what you're doing, the easiest solution looks to be bash + curl.
The man page for the latter is available here:
http://curl.haxx.se/docs/manpage.html
You can do POSTs as well as GETs, HTTPS, show headers, work with cookies, do basic and digest HTTP authentication, and tunnel through all sorts of proxies (including NTLM on *nix), amongst other things.
curl is also available as a shared library with C and PHP support.
HTH
C.
Answer 8:
Python urllib may be what you're looking for.
Alternatively, PowerShell exposes the full .NET HTTP library in a scripting environment.
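For a simple GET with a custom header, a sketch using the Python 3 standard library equivalent (urllib.request) might look like this; the URL and header value are placeholders:
import urllib.request
req = urllib.request.Request(
    "http://www.example.com/",                  # placeholder URL
    headers={"User-Agent": "my-script/0.1"},    # set a header
)
with urllib.request.urlopen(req) as resp:
    print(resp.status, resp.headers.get("Content-Type"))
    body = resp.read().decode("utf-8", errors="replace")
print(body[:200])                               # peek at the HTML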
Answer 9:
Twill is pretty good and made for testing. It can be used as a script, in an interactive session, or from within a Python program.
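As a rough illustration of using it from within Python, assuming twill's command-style functions; the URL, form number, and field names are made-up placeholders:
from twill.commands import go, code, showforms, formvalue, submit, find
go("http://www.example.com/login")    # hypothetical URL
code(200)                             # assert the HTTP status
showforms()                           # list the forms on the page
formvalue("1", "username", "me")      # hypothetical form/field names
formvalue("1", "password", "secret")
submit()                              # submit the filled-in form
find("Welcome")                       # assert some text in the response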
Answer 10:
Perl and WWW::Mechanize can make web scraping and the like simple and easy, including easy handling of forms (say you want to go to a login page, fill in a username and password, and submit the form, handling cookies and hidden session identifiers just as a browser would...).
Similarly, finding or extracting links from the fetched page is trivial.
If you need to parse stuff out of the resulting pages that WWW::Mechanize can't easily help with, then feed the result to HTML::TreeBuilder to make parsing easy.
Answer 11:
What about using PHP+Curl, or just bash?
Answer 12:
Some Ruby libraries:
- httparty: really interesting; the philosophy behind it is appealing.
- mechanize: a classic, good-quality web automation library.
- scrubYt: puzzling at first glance but fun to use.