URLConnection is not allowing me to access data on HTTP errors (404, 500, etc.)

Posted 2020-05-19 05:49

Question:

I am making a crawler, and need to get the data from the stream regardless of whether the response is a 200 or not. curl does it, as does any standard browser.

The following will not actually get the content of the response, even though there is some: an exception is thrown carrying the HTTP error status code. I want the output regardless; is there a way? I prefer to use this library because it does persistent connections, which is perfect for the kind of crawling I am doing.

package test;

import java.net.*;
import java.io.*;

public class Test {

    public static void main(String[] args) {

         try {

            URL url = new URL("http://github.com/XXXXXXXXXXXXXX");
            URLConnection connection = url.openConnection();

            // getInputStream() throws an IOException for 4xx/5xx responses,
            // so an error page's body is never read here.
            BufferedReader inStream = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            String inputLine;

            while ((inputLine = inStream.readLine()) != null) {
                System.out.println(inputLine);
            }
            inStream.close();
        } catch (MalformedURLException me) {
            System.err.println("MalformedURLException: " + me);
        } catch (IOException ioe) {
            System.err.println("IOException: " + ioe);
        }
    }
}

Worked, thanks. Here is what I came up with, just as a rough proof of concept:

import java.net.*;
import java.io.*;

public class Test {

    public static void main(String[] args) {

        URL url = null;
        URLConnection connection = null;
        String inputLine = "";

        try {

            url = new URL("http://verelo.com/asdfrwdfgdg");
            connection = url.openConnection();

            BufferedReader inStream = new BufferedReader(
                    new InputStreamReader(connection.getInputStream()));
            String line;

            while ((line = inStream.readLine()) != null) {
                System.out.println(line);
            }
            inStream.close();
        } catch (MalformedURLException me) {
            System.err.println("MalformedURLException: " + me);
        } catch (IOException ioe) {
            System.err.println("IOException: " + ioe);

            // getErrorStream() returns the body of an error response, or
            // null if the connection failed before any response arrived.
            InputStream error = ((HttpURLConnection) connection).getErrorStream();

            try {
                if (error != null) {
                    int data = error.read();
                    while (data != -1) {
                        inputLine = inputLine + (char) data;
                        data = error.read();
                    }
                    error.close();
                }
            } catch (Exception ex) {
                try {
                    if (error != null) {
                        error.close();
                    }
                } catch (Exception e) {
                    // ignore; nothing more to do during cleanup
                }
            }
        }

        System.out.println(inputLine);
    }
}

Answer 1:

Simple:

URLConnection connection = url.openConnection();
InputStream is;
if (connection instanceof HttpURLConnection) {
   HttpURLConnection httpConn = (HttpURLConnection) connection;
   // Check the status first: getInputStream() throws an IOException
   // for 4xx/5xx responses, before any check below could ever run.
   int statusCode = httpConn.getResponseCode();
   if (statusCode >= 200 && statusCode < 300) {
     is = httpConn.getInputStream();
   } else {
     is = httpConn.getErrorStream();
   }
} else {
   is = connection.getInputStream();
}

You can refer to the Javadoc for an explanation. The best way to handle this is as follows:

URLConnection connection = url.openConnection();
InputStream is = null;
try {
    is = connection.getInputStream();
} catch (IOException ioe) {
    if (connection instanceof HttpURLConnection) {
        HttpURLConnection httpConn = (HttpURLConnection) connection;
        int statusCode = httpConn.getResponseCode();
        if (statusCode < 200 || statusCode >= 300) {
            // Note: getErrorStream() can itself return null.
            is = httpConn.getErrorStream();
        }
    }
}
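
Either way, once you have chosen a stream, reading the body is the same. Here is a minimal helper sketch; the readBody name is mine, not from the answer, and it simply treats a null stream as an empty body:

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

// Drains whichever stream was selected above into a String.
static String readBody(InputStream is) throws IOException {
    if (is == null) {
        return ""; // no body available (e.g. getErrorStream() returned null)
    }
    StringBuilder body = new StringBuilder();
    BufferedReader reader = new BufferedReader(new InputStreamReader(is));
    String line;
    while ((line = reader.readLine()) != null) {
        body.append(line).append('\n');
    }
    reader.close();
    return body.toString();
}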


Answer 2:

You need to do the following after calling openConnection.

  1. Cast the URLConnection to HttpURLConnection

  2. Call getResponseCode

  3. If the response is a success, use getInputStream, otherwise use getErrorStream

(The test for success should be 200 <= code < 300, because there are valid HTTP success codes apart from 200; see the sketch below.)
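
Putting those three steps together, a minimal sketch (assuming url is an http: URL, so the cast in step 1 is safe):

HttpURLConnection httpConn = (HttpURLConnection) url.openConnection(); // 1. cast
int statusCode = httpConn.getResponseCode();                           // 2. get the status
InputStream is = (statusCode >= 200 && statusCode < 300)               // 3. pick the stream
        ? httpConn.getInputStream()
        : httpConn.getErrorStream();                                   // may be null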


I am making a crawler, and need to get the data from the stream regardless of whether the response is a 200 or not.

Just be aware that if the code is a 4xx or 5xx, then the "data" is likely to be an error page of some kind.


A final point: you should always respect the "robots.txt" file ... and read the Terms of Service before crawling / scraping the content of a site whose owners might care. Simply blatting off GET requests is likely to annoy site owners ... unless you've already come to some sort of "arrangement" with them.
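
For illustration, here is a very rough sketch of such a robots.txt check. Everything in it is an assumption for demonstration: it only honours the User-agent: * section, uses naive prefix matching, and the RobotsCheck / isAllowed names are invented; real robots.txt processing has more rules than this.

import java.io.BufferedReader;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class RobotsCheck {

    // True if 'path' is not matched by any Disallow prefix in the
    // "User-agent: *" section of http://host/robots.txt.
    static boolean isAllowed(String host, String path) throws IOException {
        List<String> disallowed = new ArrayList<String>();
        try {
            URL robots = new URL("http://" + host + "/robots.txt");
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(robots.openStream()));
            boolean inStarSection = false;
            String line;
            while ((line = in.readLine()) != null) {
                line = line.trim();
                if (line.toLowerCase().startsWith("user-agent:")) {
                    inStarSection = line.substring(11).trim().equals("*");
                } else if (inStarSection
                        && line.toLowerCase().startsWith("disallow:")) {
                    String rule = line.substring(9).trim();
                    if (!rule.isEmpty()) {
                        disallowed.add(rule);
                    }
                }
            }
            in.close();
        } catch (FileNotFoundException fnfe) {
            return true; // no robots.txt at all: nothing is disallowed
        }
        for (String rule : disallowed) {
            if (path.startsWith(rule)) {
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(isAllowed("github.com", "/XXXXXXXXXXXXXX"));
    }
}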