Read timeout using either urllib2 or any other http library

Posted 2019-01-06 16:38

I have code for reading a URL like this:

from urllib2 import Request, urlopen
req = Request(url)
for key, val in headers.items():
    req.add_header(key, val)
res = urlopen(req, timeout=timeout)
# This line blocks
content = res.read()

The timeout works for the urlopen() call. But then the code gets to the res.read() call, where I want to read the response data, and the timeout isn't applied there. So the read call may hang almost forever, waiting for data from the server. The only solution I've found is to use a signal to interrupt the read(), which is not suitable for me since I'm using threads.

What other options are there? Is there an HTTP library for Python that handles read timeouts? I've looked at httplib2 and requests, and they seem to suffer from the same issue. I don't want to write my own nonblocking network code using the socket module, because I think there should already be a library for this.

Update: none of the solutions below solve the problem for me. You can see for yourself that setting the socket or urlopen timeout has no effect when downloading a large file:

from urllib2 import urlopen
url = 'http://iso.linuxquestions.org/download/388/7163/http/se.releases.ubuntu.com/ubuntu-12.04.3-desktop-i386.iso'
c = urlopen(url)
c.read()

At least on Windows with Python 2.7.3, the timeouts are being completely ignored.

8 Answers
何必那么认真
Answer 2 · 2019-01-06 16:52

Any asynchronous network library should let you enforce a total timeout on any I/O operation. For example, here's a gevent version:

#!/usr/bin/env python2
import gevent
import gevent.monkey # $ pip install gevent
gevent.monkey.patch_all()

import urllib2

with gevent.Timeout(2): # enforce total timeout
    response = urllib2.urlopen('http://localhost:8000')
    encoding = response.headers.getparam('charset')
    print response.read().decode(encoding)
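
If you'd rather handle expiry than let the exception propagate, gevent.Timeout can be caught like any other exception (a minimal sketch, reusing the URL above):

try:
    with gevent.Timeout(2):
        response = urllib2.urlopen('http://localhost:8000')
        content = response.read()
except gevent.Timeout:
    content = None  # the whole request exceeded the 2-second budget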

And here's the asyncio equivalent:

#!/usr/bin/env python3.5
import asyncio
import aiohttp # $ pip install aiohttp

async def fetch_text(url):
    # the old module-level aiohttp.get() helper has been removed;
    # a ClientSession works across aiohttp versions
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

# wait_for() cancels the fetch and raises asyncio.TimeoutError on expiry
text = asyncio.get_event_loop().run_until_complete(
    asyncio.wait_for(fetch_text('http://localhost:8000'), timeout=2))
print(text)

The test HTTP server is defined below (see slow_http_server.py in the pycurl answer).

Anthone
Answer 3 · 2019-01-06 16:54

I had the same issue with a socket timeout on the read statement. What worked for me was putting both the urlopen and the read inside a try statement. Hope this helps!
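
A minimal sketch of that arrangement, assuming a default socket timeout has been set first (the URL and the 10-second value are placeholders):

import socket
import urllib2

socket.setdefaulttimeout(10)  # applies to connect and to each recv

try:
    res = urllib2.urlopen('http://example.com')
    content = res.read()  # a stalled recv now raises socket.timeout
except (urllib2.URLError, socket.timeout):
    content = None  # handle or re-raise as appropriate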

淡お忘
Answer 4 · 2019-01-06 16:56

I'd expect this to be a common problem, and yet no answers are to be found anywhere. I just built a solution for this using a timeout signal:

import socket
import signal
import urllib2

timeout = 10
socket.setdefaulttimeout(timeout)

def timeout_catcher(signum, _):
    raise urllib2.URLError("Read timeout")

signal.signal(signal.SIGALRM, timeout_catcher)

def safe_read(url, timeout_time):
    signal.setitimer(signal.ITIMER_REAL, timeout_time)
    try:
        return urllib2.urlopen(url, timeout=timeout_time).read()
    finally:
        # disarm the timer whether the read succeeded or raised, so a
        # stray SIGALRM cannot fire after we return
        signal.setitimer(signal.ITIMER_REAL, 0)

# usage: content = safe_read('http://uberdns.eu', 10.0)

By the way, credit for the signal part of the solution goes to this answer: python timer mystery.

ら.Afraid
Answer 5 · 2019-01-06 16:57

This isn't the behavior I see. I get a URLError when the call times out:

from urllib2 import Request, urlopen
req = Request('http://www.google.com')
res = urlopen(req,timeout=0.000001)
#  Traceback (most recent call last):
#  File "<stdin>", line 1, in <module>
#  ...
#  raise URLError(err)
#  urllib2.URLError: <urlopen error timed out>

Can't you catch this error and then avoid trying to read res? When I try to use res.read() after this, I get NameError: name 'res' is not defined. Is something like this what you need:

from urllib2 import URLError

try:
    res = urlopen(req, timeout=3.0)
except URLError:
    print 'Doh!'
else:
    # only read if urlopen succeeded; with a finally block,
    # res.read() would raise NameError after a timeout
    print 'yay!'
    print res.read()

I suppose the way to implement a timeout manually is via multiprocessing, no? If the job hasn't finished you can terminate it.
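
A minimal sketch of that multiprocessing idea (the URL and the 3-second deadline are placeholders; the __main__ guard matters on Windows):

import multiprocessing
from urllib2 import urlopen

def fetch(url, queue):
    # runs in a child process, so a hung read() only blocks that process
    queue.put(urlopen(url).read())

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    proc = multiprocessing.Process(target=fetch,
                                   args=('http://www.google.com', queue))
    proc.start()
    proc.join(3.0)  # overall deadline in seconds
    if proc.is_alive():
        proc.terminate()  # kill the worker if it is still running
        proc.join()
        print 'Doh! timed out'
    else:
        print queue.get()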

Answer 6 · 2019-01-06 16:58

The pycurl.TIMEOUT option applies to the whole request:

#!/usr/bin/env python3
"""Test that pycurl.TIMEOUT does limit the total request timeout."""
import sys
import pycurl

timeout = 2  # NOTE: pycurl.TIMEOUT limits the total transfer time, covering both connect and read
c = pycurl.Curl()
c.setopt(pycurl.CONNECTTIMEOUT, timeout)
c.setopt(pycurl.TIMEOUT, timeout)
c.setopt(pycurl.WRITEFUNCTION, sys.stdout.buffer.write)
c.setopt(pycurl.HEADERFUNCTION, sys.stderr.buffer.write)
c.setopt(pycurl.NOSIGNAL, 1)
c.setopt(pycurl.URL, 'http://localhost:8000')
c.setopt(pycurl.HTTPGET, 1)
c.perform()

The code raises the timeout error in ~2 seconds. I've tested the total read timeout against a server that sends the response in multiple chunks, with the delay between chunks smaller than the timeout:

$ python -mslow_http_server 1

where slow_http_server.py is:

#!/usr/bin/env python
"""Usage: python -mslow_http_server [<read_timeout>]

   Return an http response with *read_timeout* seconds between parts.
"""
import time
try:
    from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer, test
except ImportError: # Python 3
    from http.server import BaseHTTPRequestHandler, HTTPServer, test

def SlowRequestHandlerFactory(read_timeout):
    class HTTPRequestHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            n = 5
            data = b'1\n'
            self.send_response(200)
            self.send_header("Content-type", "text/plain; charset=utf-8")
            self.send_header("Content-Length", n*len(data))
            self.end_headers()
            for i in range(n):
                self.wfile.write(data)
                self.wfile.flush()
                time.sleep(read_timeout)
    return HTTPRequestHandler

if __name__ == "__main__":
    import sys
    read_timeout = int(sys.argv[1]) if len(sys.argv) > 1 else 5
    test(HandlerClass=SlowRequestHandlerFactory(read_timeout),
         ServerClass=HTTPServer)

I've tested the total connection timeout with http://google.com:22222.
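
If you want to abort only when the transfer stalls rather than capping the total time, libcurl also exposes low-speed options through pycurl; a minimal sketch (the 1 byte/s and 30-second values are placeholders):

#!/usr/bin/env python3
import sys
import pycurl

c = pycurl.Curl()
c.setopt(pycurl.URL, 'http://localhost:8000')
c.setopt(pycurl.WRITEFUNCTION, sys.stdout.buffer.write)
# abort if the average speed stays below 1 byte/s for 30 consecutive seconds
c.setopt(pycurl.LOW_SPEED_LIMIT, 1)
c.setopt(pycurl.LOW_SPEED_TIME, 30)
c.setopt(pycurl.NOSIGNAL, 1)
c.perform()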

仙女界的扛把子
Answer 7 · 2019-01-06 17:01

One possible (imperfect) solution is to set the global socket timeout:

import socket
import urllib2

# timeout in seconds
socket.setdefaulttimeout(10)

# this call to urllib2.urlopen now uses the default timeout
# we have set in the socket module
req = urllib2.Request('http://www.voidspace.org.uk')
response = urllib2.urlopen(req)

However, this only works if you're willing to globally modify the timeout for all users of the socket module. I'm running the request from within a Celery task, so doing this would mess up timeouts for the Celery worker code itself.
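
One way to limit the blast radius, although it is still racy if other threads create sockets concurrently, is to save and restore the previous default around the call (a minimal sketch):

import socket
import urllib2

old_timeout = socket.getdefaulttimeout()
socket.setdefaulttimeout(10)
try:
    response = urllib2.urlopen('http://www.voidspace.org.uk')
    content = response.read()
finally:
    # restore whatever the process had before (None means no timeout)
    socket.setdefaulttimeout(old_timeout)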

I'd be happy to hear any other solutions...
