
Getting twisted.defer.CancelledError when using Scrapy


Whenever I run the scrapy crawl command, the following error shows up:

2016-03-12 00:16:56 [scrapy] ERROR: Error downloading <GET http://XXXXXXX/rnd/sites/default/files/Agreement%20of%20FFCCA(1).pdf>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 246, in _cb_bodyready
    raise defer.CancelledError()
CancelledError
2016-03-12 00:16:56 [scrapy] ERROR: Error downloading <GET http://XXXXXX/rnd/sites/default/files/S&P_Chemicals,etc.20150903.doc>
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/twisted/internet/defer.py", line 577, in _runCallbacks
    current.result = callback(current.result, *args, **kw)
  File "/usr/local/lib/python2.7/dist-packages/scrapy/core/downloader/handlers/http11.py", line 246, in _cb_bodyready
    raise defer.CancelledError()
CancelledError

I have tried searching the internet for this error, but to no avail.

My crawling code is as follows:

import os
import StringIO
import sys
import scrapy
from scrapy.conf import settings
from scrapy.selector import Selector
from scrapy.linkextractors import LinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule

class IntSpider(CrawlSpider):
    name = "intranetspidey"
    allowed_domains = ["*****"]
    start_urls = [
        "******"
    ]
    rules = (
        Rule(LinkExtractor(deny_extensions=["ppt", "pptx"], deny=(r'.*\?.*')),
             follow=True,
             callback='parse_webpage'),
    )


    def get_pdf_text(self, response):
        """Peek inside the PDF to check for possible violations.
        @return: PDF content as a searchable plain-text string
        """
        try:
            from pyPdf import PdfFileReader
        except ImportError:
            print "Needed: easy_install pyPdf"
            raise
        stream = StringIO.StringIO(response.body)
        reader = PdfFileReader(stream)
        text = u""

        if reader.getDocumentInfo().title:
            # Title is optional, may be None
            text += reader.getDocumentInfo().title

        for page in reader.pages:
            # XXX: Does this handle unicode properly?
            text += page.extractText()

        return text

    def parse_webpage(self, response):
        ct = response.headers.get("content-type", "").lower()
        if "pdf" in ct or ".pdf" in response.url:
            data = self.get_pdf_text(response)

        elif "html" in ct:
            pass  # do something with the HTML page

I have only just started using Scrapy, and I would really appreciate your expert help.

2 Answers

  • 0

    Ah - easy! :)

    Just open the source code where the error is raised... it seems the page exceeds maxsize... which leads us here.

    So the problem is that you are trying to fetch large documents. Increase the DOWNLOAD_MAXSIZE limit in your settings and you should be fine.
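
    For illustration, a minimal sketch of what that settings change might look like (the values below are just examples; DOWNLOAD_MAXSIZE defaults to 1 GiB, and setting it to 0 disables the check entirely):

    # settings.py -- raise (or disable) the response size limits
    DOWNLOAD_MAXSIZE = 2 * 1024 * 1024 * 1024   # e.g. allow responses up to 2 GiB
    DOWNLOAD_WARNSIZE = 512 * 1024 * 1024       # e.g. only warn above 512 MiB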

    Note: your performance will suffer, because you block the CPU with PDF decoding and no further requests are issued while that happens. Scrapy's architecture is strictly single-threaded. Here are two (of many) solutions:

    a) Download the files with the file pipeline and batch-process them with another system afterwards (see the sketch after this list).

    b) Use reactor.spawnProcess() and do the PDF decoding in a separate process (see here). This lets you use Python or any other command-line tool for the PDF decoding.
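
    A rough sketch of option (a), assuming the stock FilesPipeline; the FILES_STORE path, the XPath and the callback body are illustrative, not taken from the original spider:

    # settings.py -- enable the built-in Files Pipeline
    ITEM_PIPELINES = {"scrapy.pipelines.files.FilesPipeline": 1}
    FILES_STORE = "/data/crawl/files"   # illustrative storage location

    # spider callback -- collect document links instead of decoding PDFs in-process;
    # the pipeline downloads the files and another system can batch-process them later
    def parse_webpage(self, response):
        pdf_links = response.xpath('//a/@href[contains(., ".pdf")]').extract()
        if pdf_links:
            yield {"file_urls": [response.urljoin(u) for u in pdf_links]}

    Note that the pipeline's downloads still go through Scrapy's downloader, so the DOWNLOAD_MAXSIZE limit discussed above applies to those requests as well.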

  • 0

    Do you get a line like this in your output/log:

    Expected response size X larger than download max size Y.
    

    It sounds like you are requesting a response larger than 1 GB. Your error comes from the download handler, which defaults to one gig but can easily be overridden:
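
    For example, the limit can be overridden per spider via custom_settings (a sketch; using 0 simply disables the size check):

    class IntSpider(CrawlSpider):
        name = "intranetspidey"
        # per-spider override of the 1 GiB default download size limit
        custom_settings = {
            "DOWNLOAD_MAXSIZE": 0,   # 0 disables the max-size check
            "DOWNLOAD_WARNSIZE": 0,  # 0 disables the warning threshold too
        }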
