
Arguments in a Scrapy spider's __init__


I am trying to use a Scrapy crawl spider to pull some real estate data, but it keeps giving me this error:

Traceback (most recent call last):
  File "//anaconda/lib/python2.7/site-packages/twisted/internet/defer.py", line 1301, in _inlineCallbacks
    result = g.send(result)
  File "//anaconda/lib/python2.7/site-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "//anaconda/lib/python2.7/site-packages/scrapy/crawler.py", line 71, in crawl
    self.spider = self._create_spider(*args, **kwargs)
  File "//anaconda/lib/python2.7/site-packages/scrapy/crawler.py", line 94, in _create_spider
    return self.spidercls.from_crawler(self, *args, **kwargs)
  File "//anaconda/lib/python2.7/site-packages/scrapy/spiders/crawl.py", line 96, in from_crawler
    spider = super(CrawlSpider, cls).from_crawler(crawler, *args, **kwargs)
  File "//anaconda/lib/python2.7/site-packages/scrapy/spiders/__init__.py", line 50, in from_crawler
    spider = cls(*args, **kwargs)
TypeError: __init__() takes exactly 3 arguments (1 given)

Here is the code that defines the crawler:

import re
import scrapy.linkextractors.sgml
import scrapy.selector
import scrapy.spiders

# RealestateItem is a scrapy.Item subclass defined elsewhere in the project
class RealestateSpider(scrapy.spiders.CrawlSpider):
    """Real estate web crawler."""
    name = 'buyrentsold'
    allowed_domains = ['realestate.com.au']

    def __init__(self, command, search):
        search = re.sub(r'\s+', '+', re.sub(',+', '%2c', search)).lower()
        url = '/{0}/in-{{0}}{{{{0}}}}/list-{{{{1}}}}'.format(command)
        start_url = 'http://www.{0}{1}'
        start_url = start_url.format(
                self.allowed_domains[0], url.format(search)
        )
        self.start_urls = [start_url.format('', 1)]
        extractor = scrapy.linkextractors.sgml.SgmlLinkExtractor(
                allow=url.format(re.escape(search)).format('.*', '')
        )
        rule = scrapy.spiders.Rule(
                extractor, callback='parse_items', follow=True
        )
        self.rules = [rule]
        super(RealestateSpider, self).__init__()

    def parse_items(self, response):
        """Parse a page of real estate listings."""
        hxs = scrapy.selector.HtmlXPathSelector(response)
        for i in hxs.select('//div[contains(@class, "listingInfo")]'):
            item = RealestateItem()
            path = 'div[contains(@class, "propertyStats")]//text()'
            item['price'] = i.select(path).extract()
            vcard = i.select('div[contains(@class, "vcard")]//a')
            item['address'] = vcard.select('text()').extract()
            url = vcard.select('@href').extract()
            if len(url) == 1:
                item['url'] = 'http://www.{0}{1}'.format(
                        self.allowed_domains[0], url[0]
                )
            features = i.select('dl')
            for field in ('bed', 'bath', 'car'):
                path = '(@class, "rui-icon-{0}")'.format(field)
                path = 'dt[contains{0}]'.format(path)
                path = '{0}/following-sibling::dd[1]'.format(path)
                path = '{0}/text()'.format(path)
                item[field] = features.select(path).extract() or 0
            yield item

And here is where the error comes up:

crawler = scrapy.crawler.CrawlerProcess(scrapy.conf.settings)
sp=RealestateSpider(command, search)
crawler.crawl(sp)
crawler.start()

Can anyone help me out with this problem? Thanks!

1 Answer

The crawler.crawl() method expects a spider class as its argument, but the code above passes it a spider object (an instance).
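As a side note, CrawlerProcess.crawl() also forwards any extra arguments on to the spider's constructor, so passing the class together with your two arguments may already be enough. A minimal sketch (untested, assuming a Scrapy version whose crawl() accepts a spider class):

    crawler = scrapy.crawler.CrawlerProcess(scrapy.conf.settings)
    # arguments after the spider class are forwarded to __init__()
    crawler.crawl(RealestateSpider, command, search)
    crawler.start()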

There are several ways to do this, but the most straightforward is to extend the spider class:

    class MySpider(Spider):
        command = None
        search = None

        def __init__(self):
            # do something with self.command and self.search
            super(MySpider, self).__init__()
    

And then:

    crawler = scrapy.crawler.CrawlerProcess(scrapy.conf.settings)
    class MySpider(RealestateSpider):
        command = 'foo'
        search = 'bar'
    crawler.crawl(MySpider)
    crawler.start()
    
