The best way to inspect HTTP response headers with Selenium

Posted 2019-01-22 21:53

Question:

I need the best way to inspect HTTP response headers with Selenium. I looked around the Selenium docs and didn't see any straightforward way to do it. Help is highly appreciated.

Answer 1:

I've answered this question a couple of times on StackOverflow; search my previous answers to dig it up. The key is that you have to write some custom Java code that extends ProxyHandler and SeleniumServer. You also need to use a release after 1.0 beta 2.

As for the people who ask why you'd want to do this: there are a lot of reasons. In my case, we're testing an AJAX-heavy app, and when things go wrong, one of the first things we debug is the network wire. That helps us see whether the AJAX call happened and, if so, what the response was. We've actually automated the collection of this info and capture it (along with a screenshot) with every Selenium test.



Answer 2:

The captureNetworkTraffic() API in DefaultSelenium captures HTTP request/response headers, and you can access them in HTML, XML, or plain-text format.

Here is sample code:

Selenium s = new DefaultSelenium(...);
s.start("captureNetworkTraffic=true");
s.open("http://www.google.com");
String xml = s.captureNetworkTraffic("xml"); // or "html", "plain"
s.stop();


Answer 3:

I would not use Selenium for this type of test; I'd suggest solving different testing concerns with different tools. What we do is:

  • Use unit tests to test code: methods and classes

  • Integration tests to test how application components hang together

  • A simple functional-test framework like Canoo WebTest (or an equivalent) to assert things like HTTP cache headers, basic page structure, simple redirection, and cookie setting/values

  • Bespoke tests to ensure validity of pages to W3C standards

  • JsUnit to test JavaScript classes and methods we created

  • Selenium to test UI functionality/behaviour and the integration of JavaScript into those pages

It's worth spending time breaking out the responsibility for testing different aspects of the system across these different tools, since using only Selenium can cause issues:

  • The bigger the suite, the slower it runs. Indeed, Selenium is inherently slower than the other tools mentioned
  • It handles behaviour/functional testing well, but XPaths can be brittle and may require increasing amounts of time and effort to maintain
  • It usually requires you to set up 'as-if-real-life' data in your app to step through user scenarios (which can be messy and take a lot of time)

There are also some techniques, which you may or may not have come across, that you can use to make your Selenium tests more resilient.



Answer 4:

I came up with a workaround that uses an embedded proxy, courtesy of the Proxoid project.

It's lightweight, unlike practically every other alternative out there (like BrowserMob or even LittleProxy).

See the HOWTO, with code, here: http://www.supermind.org/blog/968/howto-collect-webdriver-http-request-and-response-headers



Answer 5:

What I did to handle this with Selenium (not Selenium RC) was to convert the HTML tests into JSP and then use Java where needed to read headers or do whatever JavaScript (Selenium is just JavaScript) couldn't do.

Perhaps you could give a few details about how you plan to use Selenium?



Answer 6:

It seems to me that it can be very useful to test HTTP response headers from Selenium. Not in 100% of cases, perhaps ... but there certainly are some. If you are checking a sequence of pages, it seems like it would be useful to test some response headers as part of that testing (Content-Type and Pragma leap to mind).
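As a minimal sketch of that kind of check, done with a plain HTTP client alongside the Selenium run rather than through Selenium itself: the local server below and its headers are hypothetical stand-ins for the page under test, and the stdlib-only approach is an assumption, not part of the original answer.

```python
# Spin up a throwaway local server that mimics the app's response headers,
# then fetch the page with urllib and assert on Content-Type and Pragma.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Pragma", "no-cache")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep request logging quiet

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/" % server.server_port
with urllib.request.urlopen(url) as resp:
    content_type = resp.headers["Content-Type"]
    pragma = resp.headers["Pragma"]
server.shutdown()

assert content_type.startswith("text/html")
assert pragma == "no-cache"
```

In a real suite you would point the client at the same URL the Selenium test just visited (reusing its session cookies as answer 8 shows) instead of a local stand-in.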



Answer 7:

Well, I was hoping to find out whether the Content-Encoding header of the HTTP response contains "gzip", because in our company we compress the CSS and JS files in our web application with gzip, and we want to verify that with Selenium after each commit.



Answer 8:

Read the session cookies from Selenium and then use a real HTTP library outside of Selenium to request the specific page.

Here is the Python code:

import requests

# get the session cookies from Selenium
cookies = {}
for s_cookie in self.selenium.get_cookies():
    cookies[s_cookie["name"]] = s_cookie["value"]

# request the PDF using those cookies
response = requests.get(
    self.full_url('/vms/business_unit/2002/operational_unit/200202/guest/40/bill/pdf/'),
    cookies=cookies,
)
self.assertEqual(response.headers["content-type"], "application/pdf")