Crawl data from a website using Scrapy 1.5.0 - Python

Posted 2019-08-31 07:04

I am trying to crawl data from a website with Scrapy (1.5.0) in Python.

Project directory:

stack/
    scrapy.cfg
    stack/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py
            stack_spider.py

Here is my items.py:

import scrapy

class StackItem(scrapy.Item):
    title = scrapy.Field()

and here is stack_spider.py:

from scrapy import Spider
from scrapy.selector import Selector

from stack.items import StackItem

class StackSpider(Spider):
    name = "stack"
    allowed_domains = ["batdongsan.com.vn"]
    start_urls = [
        "https://batdongsan.com.vn/nha-dat-ban",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="p-title"]/h3')

        for question in questions:
            item = StackItem()
            item['title'] = question.xpath(
                'a/text()').extract()[0]

            yield item

I don't know why I can't crawl the data. I really need your help. Thanks.

Tags: python scrapy

4 Answers

Lonely孤独者°
#2 · 2019-08-31 07:36

Set a User-Agent.

Go to your Scrapy project's settings.py and paste this in:

USER_AGENT = 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36'
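If a single static User-Agent still gets blocked, a common next step (the last answer below links to an article on this) is to rotate User-Agents. A minimal sketch, assuming a small hypothetical list of browser strings; swap in whichever ones you like:

```python
import random

# Hypothetical list of desktop User-Agent strings (assumption, not from the answer).
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.1 Safari/605.1.15",
]

def random_user_agent():
    """Return a randomly chosen User-Agent string from the pool."""
    return random.choice(USER_AGENTS)
```

You could then send the chosen string as the User-Agent header on each request instead of the single fixed value above.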
做个烂人
#3 · 2019-08-31 07:49

If you just want to crawl the website and get the source code, this might help.

import urllib.request as req

def imLS():
    url = "https://batdongsan.com.vn/nha-dat-ban"
    data = req.Request(url)
    resp = req.urlopen(data)
    respData = resp.read()
    print(respData)
imLS()
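Note that this plain urllib request may get blocked for the same reason the Scrapy spider does: no browser-like User-Agent header. A minimal sketch of the same fetch with a header attached (the function name and default header value are illustrative assumptions):

```python
import urllib.request as req

def fetch_source(url, user_agent="Mozilla/5.0"):
    """Fetch a page's HTML, sending a User-Agent header so the request
    looks like it comes from a browser rather than a script."""
    request = req.Request(url, headers={"User-Agent": user_agent})
    with req.urlopen(request) as resp:
        # Decode bytes to text; replace any characters outside UTF-8.
        return resp.read().decode("utf-8", errors="replace")
```

Calling `fetch_source("https://batdongsan.com.vn/nha-dat-ban")` would then return the page HTML as a string, assuming the site accepts the header.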
霸刀☆藐视天下
#4 · 2019-08-31 07:51

To parse each page you need to add a little more code.

import re

from scrapy import Spider
from scrapy.selector import Selector

class StackSpider(Spider):
    name = "batdongsan"
    allowed_domains = ["<DOMAIN>"]
    start_urls = [
        "https://<DOMAIN>/nha-dat-ban",
    ]

    def parse(self, response):
        questions = Selector(response).xpath('//div[@class="p-title"]/h3')

        # This part of code collect only titles. You need to add more fields to be collected if you need.
        for question in questions:
            title = question.xpath(
                'a/text()').extract_first().strip()
            yield {'title': title}

        if not re.search(r'\d+', response.url):
            # Now we have to go through the pagination pages.
            url_prefix = response.css('div.background-pager-right-controls a::attr(href)').extract_first()
            url_last = response.css('div.background-pager-right-controls a::attr(href)').extract()[-1]
            max_page = re.findall(r'\d+', url_last)[0]
            for n in range(2, int(max_page) + 1):
                next_page = url_prefix + '/p' + str(n)
                yield response.follow(next_page, callback=self.parse)

Replace <DOMAIN> with your domain. Also, I did not use the Item class in my code.
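The pagination loop above builds listing URLs by appending `/p2`, `/p3`, and so on up to the last page number. A standalone sketch of that URL construction (the helper name is hypothetical, not part of the spider):

```python
def build_page_urls(url_prefix, max_page):
    """Build paginated listing URLs of the form <prefix>/p2 ... <prefix>/pN,
    mirroring the for-loop in the spider above. Page 1 is the start URL
    itself, so the range begins at 2."""
    return [f"{url_prefix}/p{n}" for n in range(2, max_page + 1)]
```

For example, `build_page_urls("https://example.com/nha-dat-ban", 4)` yields the URLs for pages 2 through 4.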

爱情/是我丢掉的垃圾
#5 · 2019-08-31 07:55

Found the answer: http://edmundmartin.com/random-user-agent-requests-python/ You need to set a User-Agent header to get past the site's crawl protection.
