
Scrapy crawler not following links


I am writing a Scrapy crawler to scrape information from a property listing website: https://www.iproperty.com.sg/sale/?page=1, https://www.iproperty.com.sg/sale/?page=2, and so on. The idea is that for each listing row, the spider extracts the information from that row and then issues a request to the row's link to get further details. Once all rows on a page have been processed, it moves on to the next page and repeats:

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from property.items import PropertyItem


class IpropCrawlerSpider(CrawlSpider):
    name = 'iprop_crawler'
    allowed_domains = ['www.iproperty.com.sg']
    start_urls = ["https://www.iproperty.com.sg/sale/?page=1"]
    rules = (
        Rule(LinkExtractor(allow=r'sale\/\?page=[1-9]'), 
         callback='parse_item', follow=True),
    )

    def parse_item(self, response):

        prop_list_xpath = '//h3[@class="cgiArp"]'

        for prop in response.xpath(prop_list_xpath):
            item = PropertyItem()
            item['name'] = prop.xpath('./a/text()').extract_first()
            deep_uri = prop.xpath('./a/@href').extract_first()
            deep_url = 'https://www.iproperty.com.sg' + deep_uri
            request = scrapy.Request(deep_url, callback=self.parse_per_prop)
            request.meta['item'] = item
            yield request

    def parse_per_prop(self, response):
        item = response.meta['item']
        item['price'] = response\
             .xpath('//div[@class="property-price duzTnm"]/text()')\
             .extract_first()
        item['address'] = response\
             .xpath('//span[@class="property-address sale-default"]/text()')\
             .extract_first()
        item['property_type'] = response\
             .xpath('//div[@class="property-attr-propertyType cXGbLS"]' \
                    + '/div[2]/text()')\
             .extract_first()
        yield item

Running this crawler results in no data being scraped:

2018-11-09 01:53:58 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: property)
2018-11-09 01:53:58 [scrapy.utils.log] INFO: Versions: lxml 3.7.2.0, libxml2 2.9.4, cssselect 1.0.0, parsel 1.5.0, w3lib 1.17.0, Twisted 17.1.0, Python 3.6.1 |Anaconda custom (64-bit)| (default, Mar 22 2017, 19:54:23) - [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)], pyOpenSSL 16.2.0 (OpenSSL 1.0.2p  14 Aug 2018), cryptography 1.7.1, Platform Linux-4.18.16-arch1-1-ARCH-x86_64-with-arch
2018-11-09 01:53:58 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'property', 'DOWNLOAD_DELAY': 1, 'NEWSPIDER_MODULE': 'property.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['property.spiders']}
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats']
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-11-09 01:53:58 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-11-09 01:53:58 [scrapy.core.engine] INFO: Spider opened
2018-11-09 01:53:58 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-11-09 01:53:58 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6024
2018-11-09 01:53:58 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.iproperty.com.sg/robots.txt> (referer: None)
2018-11-09 01:54:01 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.iproperty.com.sg/sale/?page=1> (referer: None)
2018-11-09 01:54:01 [scrapy.core.engine] INFO: Closing spider (finished)
2018-11-09 01:54:01 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 460,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 154841,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2018, 11, 8, 17, 54, 1, 224281),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'memusage/max': 47136768,
 'memusage/startup': 47136768,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2018, 11, 8, 17, 53, 58, 676635)}
2018-11-09 01:54:01 [scrapy.core.engine] INFO: Spider closed (finished)

If I change parse_item to parse_start_url, the first page is scraped, but the links to the following pages are still not followed:

2018-11-09 02:11:42 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 6195,
 'downloader/request_count': 20,
 'downloader/request_method_count/GET': 20,
 'downloader/response_bytes': 2433163,
 'downloader/response_count': 20,
 'downloader/response_status_count/200': 20,
 'finish_reason': 'shutdown',
 'finish_time': datetime.datetime(2018, 11, 8, 18, 11, 42, 430358),
 'item_scraped_count': 18,
 'log_count/DEBUG': 39,
 'log_count/INFO': 8,
 'memusage/max': 47132672,
 'memusage/startup': 47132672,
 'request_depth_max': 1,
 'response_received_count': 20,
 'scheduler/dequeued': 19,
 'scheduler/dequeued/memory': 19,
 'scheduler/enqueued': 21,
 'scheduler/enqueued/memory': 21,
 'start_time': datetime.datetime(2018, 11, 8, 18, 11, 18, 416991)}
2018-11-09 02:11:42 [scrapy.core.engine] INFO: Spider closed (shutdown)
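
For context, a CrawlSpider hands responses for start_urls to parse_start_url, while Rule callbacks only fire for links that the rules extract. A minimal sketch of that split (the spider and method names here are made up for illustration, not part of the question's code):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class PaginationSketchSpider(CrawlSpider):
    # hypothetical spider, only to illustrate how callbacks are dispatched
    name = 'pagination_sketch'
    allowed_domains = ['www.iproperty.com.sg']
    start_urls = ['https://www.iproperty.com.sg/sale/?page=1']

    rules = (
        Rule(LinkExtractor(allow=r'sale/\?page=\d+'),
             callback='parse_listing_page', follow=True),
    )

    def parse_start_url(self, response):
        # the start URL never passes through a Rule, so reuse the same logic
        return self.parse_listing_page(response)

    def parse_listing_page(self, response):
        # row extraction would go here
        self.logger.info('parsing %s', response.url)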

I would appreciate some insight into why the links to the next pages are not being followed.

2 Answers

  • 2

    Judging from the Scrapy documentation, it looks like you are passing a reference to the parse_item method as the rule's callback argument. However, according to the documentation, that callback operates on the extracted links, which is not what you want, since your function needs to operate on a Scrapy Response. What you should do instead is use the process_request argument. On a related note, I also changed your regular expression, because the way you have it now it only matches pages 1 through 9:

    rules = (
        Rule(LinkExtractor(allow = r'sale\/\?page=[1-9]\d*'), 
         process_request = 'parse_item', follow = True),
    )
    

    Also, you probably should not be returning Request objects back to Scrapy yourself; instead, use scrapy.Item or an ItemLoader to store your data.
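
    A minimal sketch of what that ItemLoader suggestion could look like for parse_per_prop, reusing the PropertyItem fields and the XPaths from the question (the loader class name is made up):

    from scrapy.loader import ItemLoader
    from scrapy.loader.processors import TakeFirst
    from property.items import PropertyItem


    class PropertyLoader(ItemLoader):
        # take the first matched value for each field, like extract_first()
        default_item_class = PropertyItem
        default_output_processor = TakeFirst()


    def parse_per_prop(self, response):
        # spider method: fill the item carried in meta via an ItemLoader
        loader = PropertyLoader(item=response.meta['item'], response=response)
        loader.add_xpath('price', '//div[@class="property-price duzTnm"]/text()')
        loader.add_xpath('address', '//span[@class="property-address sale-default"]/text()')
        loader.add_xpath('property_type',
                         '//div[@class="property-attr-propertyType cXGbLS"]/div[2]/text()')
        yield loader.load_item()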

  • 0

    So it turned out the problem was with the rules themselves, and I had to switch to XPath selectors.
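
    The final rule is not shown here; one common way to drive a LinkExtractor with XPath selectors instead of a URL regex is its restrict_xpaths argument, roughly like the sketch below (the pagination XPath is a placeholder, not taken from the site):

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//ul[@class="pagination"]'),
             callback='parse_item', follow=True),
    )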
