
Scrapy returns null output when extracting elements from a table using XPath


I have been trying to scrape this site, which has details for Colorado oil wells: https://cogcc.state.co.us/cogis/FacilityDetail.asp?facid=12307555&type=WELL

Scrapy crawls the site and returns the URL when I scrape it, but when I try to extract an element inside the table using XPath (the well's county), all I get is empty output, i.e. [].

This happens for any element on the page I try to access.

Here is my spider:

import scrapy
import json

class coloradoSpider(scrapy.Spider):
    name = "colorado"
    allowed_domains = ["cogcc.state.co.us"]
    start_urls = ["https://cogcc.state.co.us/cogis/ProductionWellMonthly.asp?APICounty=123&APISeq=07555&APIWB=00&Year=All"]

    def parse(self, response):
        url = response.url
        response.selector.remove_namespaces()
        # XPath copied from the browser's element inspector; this is the call
        # that returns an empty list.
        variable = response.xpath("/html/body/blockquote/font/font/table/tbody/tr[3]/th[3]").extract()
        print url, variable

Here is the output:

2015-05-13 20:14:54+0530 [scrapy] INFO: Scrapy 0.24.6 started (bot: tutorial)
2015-05-13 20:14:54+0530 [scrapy] INFO: Optional features available: ssl, http11
2015-05-13 20:14:54+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
2015-05-13 20:14:54+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2015-05-13 20:14:55+0530 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-05-13 20:14:55+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-05-13 20:14:56+0530 [scrapy] INFO: Enabled item pipelines:
2015-05-13 20:14:56+0530 [colorado] INFO: Spider opened
2015-05-13 20:14:56+0530 [colorado] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-05-13 20:14:56+0530 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-05-13 20:14:56+0530 [scrapy] DEBUG: Web service listening on 127.0.0.1:6080
2015-05-13 20:15:02+0530 [colorado] DEBUG: Crawled (200) <GET https://cogcc.state.co.us/cogis/ProductionWellMonthly.asp?APICounty=123&APISeq=07555&APIWB=00&Year=All> (referer: None)
https://cogcc.state.co.us/cogis/ProductionWellMonthly.asp?APICounty=123&APISeq=07555&APIWB=00&Year=All []
2015-05-13 20:15:02+0530 [colorado] INFO: Closing spider (finished)
2015-05-13 20:15:02+0530 [colorado] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 292,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 366770,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2015, 5, 13, 14, 45, 2, 349000),
         'log_count/DEBUG': 3,
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2015, 5, 13, 14, 44, 56, 77000)}
2015-05-13 20:15:02+0530 [colorado] INFO: Spider closed (finished)

If I go back up a few nodes in the XPath, I do get output, in which Scrapy returns the table as HTML.
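
For reference, the same behaviour can be reproduced interactively with scrapy shell; the shortened expression at the end is only an illustration of "going back up a few nodes", not a verified selector:

scrapy shell "https://cogcc.state.co.us/cogis/ProductionWellMonthly.asp?APICounty=123&APISeq=07555&APIWB=00&Year=All"

# The full browser-copied path (with tbody) comes back empty:
response.xpath("/html/body/blockquote/font/font/table/tbody/tr[3]/th[3]").extract()
# []

# Stepping back up a few nodes returns the table's HTML instead
# (illustrative path; the exact node that matches depends on the page's real markup):
response.xpath("/html/body/blockquote/font/font/table").extract()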

Thanks!

2 Answers

  • 1

    It looks like an XPath issue: the site's markup likely omits tbody, but browsers insert it automatically when rendering the page, so an XPath copied from the browser does not match the raw HTML that Scrapy receives. You can get more information about this here.

    So, since what you need is the county value on the given page (WELD #123), a possible XPath is:

    In [20]: response.xpath('/html/body/font/table/tr[6]/td[2]//text()').extract()
    Out[20]: [u'WELD                               #123']
    
  • 0

    It looks like an XPath issue; maybe try this:

    //blockquote/font/font/table//tr/td[3]//text()

    I don't think you need the tbody tag.
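
Putting the answers together in the original spider, a revised parse might look like the sketch below. The tr[6]/td[2] path is the one the first answer showed returning the county; whether those indices hold for other facility pages is an assumption, and the second answer's tbody-free relative expression is noted as an alternative.

import scrapy

class coloradoSpider(scrapy.Spider):
    name = "colorado"
    allowed_domains = ["cogcc.state.co.us"]
    start_urls = ["https://cogcc.state.co.us/cogis/ProductionWellMonthly.asp?APICounty=123&APISeq=07555&APIWB=00&Year=All"]

    def parse(self, response):
        # No tbody in the expression: the server's HTML omits it, and the
        # browser only inserts it while rendering, so it must not appear here.
        county = response.xpath("/html/body/font/table/tr[6]/td[2]//text()").extract()
        # Alternative from the second answer:
        #   //blockquote/font/font/table//tr/td[3]//text()
        print response.url, county  # Python 2 print, matching the original spider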
