
How to scrape data at the second level of a page with Scrapy

I want to use a Scrapy spider to get the data (question title, content, and answers) from all posts on the following website:

https://forums.att.com/t5/custom/page/page-id/latest-activity/category-id/Customer_Care/page/1?page-type=latest-solutions-topics

The problem is that I just don't know how to first follow the links to the posts and then scrape the data from all 15 posts per page.

import scrapy

class ArticleSpider(scrapy.Spider):
    name = "post"
    start_urls = ['https://forums.att.com/t5/Data-Messaging-Features-Internet/Throttling-for-unlimited-data/m-p/4805201#M73235']

    def parse(self, response):
        SET_SELECTOR = 'body'
        for post in response.css(SET_SELECTOR):

            # Selectors for title, content and answer
            TITLE_SELECTOR = '.lia-message-subject h5 ::text'
            CONTENT_SELECTOR = '.lia-message-body-content'
            ANSWER_SELECTOR = '.lia-message-body-content'

            yield {
                # [0].extract() is equivalent to extract_first()
                'Qtitle': post.css(TITLE_SELECTOR)[0].extract(),
                'Qcontent': post.css(CONTENT_SELECTOR)[0].extract(),
                'Answer': post.css(ANSWER_SELECTOR)[1].extract(),
            }

        # Running through all 173 pages
        NEXT_PAGE_SELECTOR = '.lia-paging-page-next a ::attr(href)'
        next_page = response.css(NEXT_PAGE_SELECTOR).extract_first()
        if next_page:
            yield scrapy.Request(
                response.urljoin(next_page),
                callback=self.parse
            )

I hope you can help me. Thanks in advance!

1 Answer


    You need to add a method for scraping the post content. You could rewrite your spider like this (I use XPath selectors):

    # -*- coding: utf-8 -*-
    import scrapy  
    
    class ArticleSpider(scrapy.Spider):
        name = "post"
        start_urls = ['https://forums.att.com/t5/custom/page/page-id/latest-activity/category-id/Customer_Care/page/1?page-type=latest-solutions-topics']
    
        def parse(self, response):
            for post_link in response.xpath('//h2//a/@href').extract():
                link = response.urljoin(post_link)
                yield scrapy.Request(link, callback=self.parse_post)
    
            # Checks if the main page has a link to next page if True keep parsing.
            next_page = response.xpath('(//a[@rel="next"])[1]/@href').extract_first()
            if next_page:
                yield scrapy.Request(next_page, callback=self.parse)
    
        def parse_post(self, response):
            # Scrape title, content from post.
            for post in response.xpath('//div[contains(@class, "lia-quilt-forum-message")]'):
                item = dict()
                item['title'] = post.xpath('.//h5/text()').extract_first()
                item['content'] = post.xpath('.//div[@class="lia-message-body-content"]//text()').extract()
                yield item
    
            # If the post page has a link to next page keep parsing.
            next_page = response.xpath('(//a[@rel="next"])[1]/@href').extract_first()
            if next_page:
                yield scrapy.Request(next_page, callback=self.parse_post)
    

    This code parses all the links on the main page and calls the `parse_post` method to scrape each post's content. Both the `parse` and `parse_post` methods check whether a next-page link exists and, if so, keep parsing.
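The core pattern in the answer — collect the post links from a listing page, make them absolute with `response.urljoin`, then schedule one request per link — can be sketched outside Scrapy with only the standard library. The HTML fragment and URLs below are illustrative placeholders, not real AT&T forum markup:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class PostLinkExtractor(HTMLParser):
    """Collects href attributes of <a> tags nested inside <h2> tags,
    mirroring the answer's //h2//a/@href XPath expression."""
    def __init__(self):
        super().__init__()
        self.h2_depth = 0
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2":
            self.h2_depth += 1
        elif tag == "a" and self.h2_depth:
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

    def handle_endtag(self, tag):
        if tag == "h2" and self.h2_depth:
            self.h2_depth -= 1

# Hypothetical listing-page fragment and page URL for illustration.
page_url = "https://forums.att.com/t5/custom/page/1"
html = """
<h2><a href="/t5/some-board/post-one/m-p/111">Post one</a></h2>
<h2><a href="/t5/some-board/post-two/m-p/222">Post two</a></h2>
"""

parser = PostLinkExtractor()
parser.feed(html)
# Scrapy's response.urljoin(link) behaves like urljoin(response.url, link):
# relative hrefs are resolved against the current page's URL.
absolute = [urljoin(page_url, link) for link in parser.links]
print(absolute)
```

In the real spider, Scrapy does the fetching and parsing for you; each absolute URL would become a `scrapy.Request(link, callback=self.parse_post)`. To run the answer's spider and save the items, something like `scrapy runspider post_spider.py -o posts.json` works, where `post_spider.py` is whatever file you saved the spider in.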
