I've been trying to consume the Twitter Streaming API using Python Requests.
There's a simple example in the documentation:
import requests
import json

r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
                  data={'track': 'requests'},
                  auth=('username', 'password'))

for line in r.iter_lines():
    if line:  # filter out keep-alive new lines
        print json.loads(line)
When I execute this, the call to requests.post() never returns. I've experimented and proved that it is definitely connecting to Twitter and receiving data from the API. However, instead of returning a response object, it just sits there consuming as much data as Twitter sends. Judging by the code above, I would expect requests.post() to return a response object with an open connection to Twitter down which I could continue to receive realtime results.

(To prove it was receiving data, I connected to Twitter using the same credentials in another shell, whereupon Twitter closed the first connection and the call returned the response object. The r.content attribute contained all the backed up data received while the connection was open.)
The documentation makes no mention of any other steps required to cause requests.post to return before consuming all the supplied data. Other people seem to be using similar code without encountering this problem, e.g. here.
I'm using:
- Python 2.7
- Ubuntu 11.04
- Requests 0.14.0
You need to switch off prefetching, which I think is a parameter that changed defaults:
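A minimal sketch, adapted from the question's own snippet and assuming Requests 0.x, where the keyword is prefetch:

import requests
import json

# prefetch=False asks Requests 0.x not to read the whole response body up
# front, so post() returns as soon as the response headers arrive
r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
                  data={'track': 'requests'},
                  auth=('username', 'password'),
                  prefetch=False)

for line in r.iter_lines():
    if line:  # filter out keep-alive new lines
        print json.loads(line)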
UPDATE: In the latest requests framework, use stream instead of prefetch:
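The same sketch under the newer API (Requests 1.x and later), where the flag is inverted and named stream:

# stream=True is the Requests 1.x+ equivalent of prefetch=False:
# defer downloading the body and iterate over it as it arrives
r = requests.post('https://stream.twitter.com/1/statuses/filter.json',
                  data={'track': 'requests'},
                  auth=('username', 'password'),
                  stream=True)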
Ah, I found the answer by reading the code. At some point, a prefetch parameter was added to the post method (and other methods, I assume).

I just needed to add a prefetch=False kwarg to requests.post().