This is the example from the "Extracting data" section of the Scrapy tutorial. Everything went fine until the scrapy shell step, where I typed this command in Windows cmd:
scrapy shell 'http://quotes.toscrape.com/page/1/'
I got this exception:
twisted.internet.error.DNSLookupError: DNS lookup failed: address "'http:" not found: [Errno 11001] getaddrinfo failed.
Exception in thread Thread-1 (most likely raised during interpreter shutdown):
I searched Stack Overflow and found a similar question; one answer there suggested trying another terminal, so I tried PyCharm's terminal, but it fails with the same exception.
PS: I'm on Windows with Python 2.7.12 and Anaconda 4.0.0 (64-bit).
I'm quite new to Scrapy, so any help is appreciated. Thank you.
Well, it seems to be related to the quoting. I tried enclosing the URL in double quotes (") and it works. I don't know whether this command differs between operating systems, since the original tutorial command uses single quotes (') around the URL. I also posted this issue on the Scrapy GitHub repository, and as @kmike said there, single quotes work fine on other operating systems such as macOS and Linux/Unix.

I had the same problem, and removing the single quotes around the URL worked for me. I'm on Windows with Python 3.6.
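For concreteness, this is how I would expect the two working variants to look on Windows cmd, using the tutorial URL (just a sketch of the fixes described above, not something taken from the Scrapy docs):

scrapy shell "http://quotes.toscrape.com/page/1/"
scrapy shell http://quotes.toscrape.com/page/1/

cmd does not treat single quotes as quoting characters, so with the original command the literal 'http: fragment ends up in the address Scrapy tries to resolve, which is exactly what the DNS lookup error above shows; double quotes (or no quotes at all, for a URL with no characters special to cmd) avoid that.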
For anyone finding this question with the same error for a local .html file: I found I had to prefix the filename with the current folder rather than just supplying the filename. Passing only the filename results in the error; prefixing it with ./ launches the shell and loads the file. Although the format of the file path is specified in the docs, I didn't realise it would be required (I assumed I would not have to supply ./ for a local file), but the docs do give examples to follow.
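As an illustration of the difference, with index.html standing in as a placeholder filename (not the file from the post above):

scrapy shell index.html
scrapy shell ./index.html

The first form looks like a domain name to the shell, so it attempts a DNS lookup and fails with the same DNSLookupError; the second is recognised as a relative file path and opens the shell on the local file.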