In NLTK there is an nltk.download() function to download the datasets that come with the NLP suite.
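For reference, this is the kind of thing I mean in NLTK (the 'punkt' package here is just an example):

import nltk

# nltk.download() with no arguments opens an interactive downloader;
# a specific package can also be fetched directly, e.g. the 'punkt' tokenizer models
nltk.download('punkt')

# after that, code that needs the data just finds it (by default under ~/nltk_data)
from nltk.tokenize import word_tokenize
print(word_tokenize("This sentence gets tokenized with the downloaded punkt models."))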
In sklearn, the documentation talks about loading datasets (http://scikit-learn.org/stable/datasets/) and about fetching data from http://mldata.org/, but for the rest of the datasets the instructions are to download them from the source.
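So the bundled datasets and the mldata.org ones seem straightforward, roughly like this (fetch_mldata and the 'MNIST original' name are from the docs for the version I have, so that part may differ for other versions):

from sklearn.datasets import load_iris, fetch_mldata

# small datasets ship inside sklearn itself, no download needed
iris = load_iris()
print(iris.data.shape)    # (150, 4)

# larger datasets are fetched from mldata.org and cached locally
mnist = fetch_mldata('MNIST original')
print(mnist.data.shape)   # (70000, 784)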
Where should I save the data that I've downloaded from the source? Are there any other steps after saving the data into the correct directory before I can call it from my Python code?
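From poking around the sklearn source it looks like the fetch_* helpers cache everything under a "data home" directory, which I think can be inspected or overridden like this (I haven't verified that this is the intended workflow):

from sklearn.datasets import get_data_home

# default cache location used by the fetch_* helpers
# (typically ~/scikit_learn_data, or $SCIKIT_LEARN_DATA if that environment variable is set)
print(get_data_home())

# the fetchers also accept an explicit data_home argument, e.g.
# fetch_20newsgroups(subset='train', data_home='/tmp/sklearn_data')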
Is there an example of how to download e.g. the 20newsgroups dataset?
I've pip-installed sklearn and tried the following, but I get a tarfile.ReadError, most probably because I haven't downloaded the dataset from the source.
>>> from sklearn.datasets import fetch_20newsgroups
>>> fetch_20newsgroups(subset='train')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/sklearn/datasets/twenty_newsgroups.py", line 207, in fetch_20newsgroups
    cache_path=cache_path)
  File "/usr/local/lib/python2.7/dist-packages/sklearn/datasets/twenty_newsgroups.py", line 89, in download_20newsgroups
    tarfile.open(archive_path, "r:gz").extractall(path=target_dir)
  File "/usr/lib/python2.7/tarfile.py", line 1678, in open
    return func(name, filemode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1727, in gzopen
    **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1705, in taropen
    return cls(name, mode, fileobj, **kwargs)
  File "/usr/lib/python2.7/tarfile.py", line 1574, in __init__
    self.firstmember = self.next()
  File "/usr/lib/python2.7/tarfile.py", line 2334, in next
    raise ReadError("empty file")
tarfile.ReadError: empty file
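Or is the problem just that an earlier, interrupted download left an empty archive in the cache? In that case, would clearing the cached 20newsgroups files and re-fetching be the right fix? This is what I would try, although the '20news*' cache layout is only my guess:

import glob
import os
import shutil

from sklearn.datasets import fetch_20newsgroups, get_data_home

# remove whatever the fetcher cached for 20newsgroups
# (the exact file/directory names may differ between sklearn versions)
data_home = get_data_home()
for path in glob.glob(os.path.join(data_home, '20news*')):
    if os.path.isdir(path):
        shutil.rmtree(path)
    else:
        os.remove(path)

# re-run the fetch so the archive is downloaded from scratch
newsgroups_train = fetch_20newsgroups(subset='train')
print(len(newsgroups_train.data))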