I've been struggling with this simple problem for too long, so I thought I'd ask for help. I am trying to read a list of journal articles from the National Library of Medicine FTP site into Python 3.3.2 (on Windows 7). The journal articles are in a .csv file.
I have tried the following code:
import csv
import urllib.request
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(ftpstream)
data = [row for row in csvfile]
It results in the following error:
Traceback (most recent call last):
File "<pyshell#4>", line 1, in <module>
data = [row for row in csvfile]
File "<pyshell#4>", line 1, in <listcomp>
data = [row for row in csvfile]
_csv.Error: iterator should return strings, not bytes (did you open the file in text mode?)
I presume I should be working with strings, not bytes? Any help with this simple problem, and an explanation of what is going wrong, would be greatly appreciated.
Even though there is already an accepted answer, I thought I'd add to the body of knowledge by showing how I achieved something similar using the requests package (which is sometimes seen as an alternative to urllib.request).
The basis of using codecs.iterdecode() to solve the original problem is still the same as in the accepted answer. Here we also see the use of the streaming provided by the requests package, in order to avoid having to load the entire file over the network into memory first (which could take a long time if the file is large).
I thought it might be useful since it helped me, as I was using requests rather than urllib.request in Python 3.6. Some of the ideas (e.g. using closing()) are picked up from this similar post.
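A sketch of the approach (note that requests only speaks HTTP/HTTPS, not ftp://, so the url below is just a placeholder, and UTF-8 content is assumed):
import csv
import codecs
import requests
from contextlib import closing

url = "https://example.com/some_file.csv"  # placeholder; requests cannot fetch ftp:// urls

with closing(requests.get(url, stream=True)) as response:
    # iter_lines() yields the body line by line as bytes without loading it all at once,
    # and codecs.iterdecode() turns those bytes into strings for csv.reader
    reader = csv.reader(codecs.iterdecode(response.iter_lines(), 'utf-8'))
    for row in reader:
        print(row)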
The problem is that urllib returns bytes. As proof, you can try to download the csv file with your browser and open it as a regular file, and the problem is gone.
A similar problem was addressed here.
It can be solved by decoding the bytes to strings with the appropriate encoding. For example:
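A minimal sketch, assuming the file is UTF-8 encoded:
import csv
import urllib.request

url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)

# read() returns bytes; decode them to one big string, then split into lines for csv.reader
text = ftpstream.read().decode('utf-8')
csvfile = csv.reader(text.splitlines())
data = [row for row in csvfile]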
The last line could also be:
data = list(csvfile)
which can be easier to read.
By the way, since the csv file is very big, this can be slow and memory-consuming. Maybe it would be preferable to use a generator.
EDIT: Using codecs as proposed by Steven Rumbalski, so it's not necessary to read the whole file to decode. Memory consumption is reduced and speed increased.
Note that the list is not created either, for the same reason.
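A sketch of that version, again assuming UTF-8:
import csv
import codecs
import urllib.request

url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"
ftpstream = urllib.request.urlopen(url)

# codecs.iterdecode() decodes the byte stream lazily as it is iterated,
# so the whole file is never read into memory and no list of rows is built
csvfile = csv.reader(codecs.iterdecode(ftpstream, 'utf-8'))
for row in csvfile:
    print(row)  # process each row as it arrives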
urlopen will return a urllib.response.addinfourl instance for an ftp request.
At this point ftpstream is a file-like object; calling .read() would return the contents as bytes, but csv.reader requires an iterable of strings in this case.
Defining a generator like so:
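A minimal sketch of such a generator (the helper name and the UTF-8 encoding are assumptions):
def iter_decoded_lines(stream):
    # yield each line of the binary stream decoded to str
    for line in stream:
        yield line.decode('utf-8')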
We can create our csv reader like so:
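For example, with a file-like object called ftpstream (like the one in the question):
csvfile = csv.reader(iter_decoded_lines(ftpstream))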
And with a url
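For illustration we can reuse the url from the question:
url = "ftp://ftp.ncbi.nlm.nih.gov/pub/pmc/file_list.csv"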
The code:
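Putting the pieces together (a sketch):
import csv
import urllib.request

ftpstream = urllib.request.urlopen(url)
csvfile = csv.reader(iter_decoded_lines(ftpstream))  # generator defined above
for row in csvfile:
    print(row)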
This prints each row of the csv file as a list of strings.