
Scrapy crawl resume does not crawl anything, just finishes


I start a crawl with a CrawlSpider-derived class and pause it with Ctrl+C. When I execute the command again to resume it, it does not continue.

My command to start and to resume:

scrapy crawl mycrawler -s JOBDIR=crawls/test5_mycrawl

Scrapy creates the folder; its permissions are 777.

When I resume the crawl, it outputs only this:

/home/adminuser/.virtualenvs/rg_harvest/lib/python2.7/site-packages/twisted/internet/_sslverify.py:184: UserWarning: You do not have the service_identity module installed. Please install it from <https://pypi.python.org/pypi/service_identity>. Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.
  verifyHostname, VerificationError = _selectVerifyImplementation()
2014-11-21 11:05:10-0500 [scrapy] INFO: Scrapy 0.24.4 started (bot: rg_harvest_scrapy)
2014-11-21 11:05:10-0500 [scrapy] INFO: Optional features available: ssl, http11, django
2014-11-21 11:05:10-0500 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'rg_harvest_scrapy.spiders', 'SPIDER_MODULES': ['rg_harvest_scrapy.spiders'], 'BOT_NAME': 'rg_harvest_scrapy'}
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-11-21 11:05:10-0500 [scrapy] INFO: Enabled item pipelines: ValidateMandatory, TypeConversion, ValidateRange, ValidateLogic, RestegourmetImagesPipeline, SaveToDB
2014-11-21 11:05:10-0500 [mycrawler] INFO: Spider opened
2014-11-21 11:05:10-0500 [mycrawler] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2014-11-21 11:05:10-0500 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Crawled (200) <GET http://eatsmarter.de/suche/rezepte> (referer: None)
2014-11-21 11:05:10-0500 [mycrawler] DEBUG: Filtered duplicate request: <GET http://eatsmarter.de/suche/rezepte?page=1> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2014-11-21 11:05:10-0500 [mycrawler] INFO: Closing spider (finished)
2014-11-21 11:05:10-0500 [mycrawler] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 225,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 19242,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'dupefilter/filtered': 29,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 733196),
     'log_count/DEBUG': 4,
     'log_count/INFO': 7,
     'request_depth_max': 1,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/disk': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/disk': 1,
     'start_time': datetime.datetime(2014, 11, 21, 16, 5, 10, 528629)}

I have only one start_url. Could that be the reason? My crawler uses a single start_url, follows the pagination through a Rule with a LinkExtractor, and calls parse_item for URLs of a specific format:

My spider code:

from datetime import datetime

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor

# RecipeItem and MyItemLoader are project-specific; their imports were
# omitted in the question.


class MyCrawlSpiderBase(CrawlSpider):
    name = 'test_spider'

    testmode = True
    crawl_start = datetime.utcnow().isoformat()

    def __init__(self, testmode=True, urls=None, *args, **kwargs):
        self.testmode = bool(int(testmode))
        super(MyCrawlSpiderBase, self).__init__(*args, **kwargs)

    def parse_item(self, response):
        # Item values shared by all concrete spiders
        l = MyItemLoader(RecipeItem(), response=response)

        l.replace_value('url', response.url)
        l.replace_value('crawl_start', self.crawl_start)

        return l.load_item()


class MyCrawlSpider(MyCrawlSpiderBase):
    name = 'example_de'
    allowed_domains = ['example.de']
    start_urls = [
        "http://example.de",
    ]

    rules = (
        # Pagination links: follow only, no callback
        Rule(
            LinkExtractor(
                allow=('/search/entry\?page=', )
            )
        ),

        # Detail pages: extract the item
        Rule(
            LinkExtractor(
                allow=('/entry/[0-9A-Za-z\-]{3,}$', ),
            ),
            callback='parse_item'
        ),
    )

    def parse_item(self, response):
        item = super(MyCrawlSpider, self).parse_item(response)

        l = MyItemLoader(item=item, response=response)

        l.replace_xpath("name", "//h1[@class='fn title']/text()")

        (...)

        return l.load_item()

2 Answers

  • 1

    If you hit Ctrl+C twice (force stop), the crawl cannot be resumed. Hit Ctrl+C once and wait for the spider to shut down gracefully.

  • 5

    Since your URL is always the same, the requests are most likely being filtered out as duplicates. You can fix this in two ways:

    • In your settings.py file, add the following line:
      DUPEFILTER_CLASS = 'scrapy.dupefilter.BaseDupeFilter'
      This replaces the default RFPDupeFilter with BaseDupeFilter, which does not filter any requests. If you actually rely on filtering out some other requests unrelated to this problem, this may not be what you want (see the settings.py sketch after this list).

    • You can get more involved in the process of creating the requests and build them with the argument dont_filter=True, which disables the filtering on a per-request basis. To achieve this, you can remove start_urls and replace it with the method start_requests(), which yields the requests to parse (see the sketch after this list). Check the official documentation for more information.
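
    For reference, a minimal settings.py sketch for the first option; the project names come from the "Overridden settings" line in the log above:

    # settings.py (module path as of Scrapy 0.24, the version in the log)
    BOT_NAME = 'rg_harvest_scrapy'
    SPIDER_MODULES = ['rg_harvest_scrapy.spiders']
    NEWSPIDER_MODULE = 'rg_harvest_scrapy.spiders'

    # Swap the default RFPDupeFilter for BaseDupeFilter, which filters nothing:
    DUPEFILTER_CLASS = 'scrapy.dupefilter.BaseDupeFilter'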
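
    And a minimal sketch of the second option, assuming the MyCrawlSpider class from the question: start_requests() replaces start_urls, and dont_filter=True exempts the seed request from the duplicate filter, so it is fetched again on resume:

    from scrapy.http import Request

    class MyCrawlSpider(MyCrawlSpiderBase):
        name = 'example_de'
        allowed_domains = ['example.de']

        # No start_urls; yield the seed request ourselves so it can be
        # marked dont_filter=True and is not dropped by the dupefilter.
        def start_requests(self):
            yield Request("http://example.de", dont_filter=True)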
