Running scrapy crawl with cron and saving to MongoDB

I am running a scrapy spider to crawl websites, using a cron job and mongodb. When I run a regular scrapy crawl, it works and saves to mongodb. However, when I run it with cron, it does not save to the database. The log output shows regular crawl results, only it doesn't save to mongodb. What am I missing here? My guess is that it has something to do with scrapy's environment, since I can use a mongo save() inside a single spider and get the desired results, but not when I put it into a pipeline.

Thanks!

**crontab -e** 
PATH=/home/ubuntu/crawlers/env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
*/15 * * * * /home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output
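
Note that the entry above redirects only stdout; any Python traceback from the cron run goes to stderr and is lost. A variant that also captures stderr in the same file (my addition, not the original crontab) would be:

*/15 * * * * /home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output 2>&1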

**pipeline**
import datetime

from pymongo import MongoClient
from scrapy.conf import settings


class EvilscrapyPipeline(object):
    def __init__(self):
        connection = MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        self.log_record(item)
        print(item)
        # Insert only items that have a url, title and content, and whose
        # url is not already stored in the collection.
        if item['url']:
            if self.collection.find({"url": item['url']}).count() == 0:
                if item['title'] and item['content']:
                    item['timestamp'] = datetime.datetime.now()
                    self.collection.insert(item)
        return item
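
For reference, a minimal sketch of the same idea using Scrapy's documented from_crawler hook instead of the module-level settings import; the setting names are the MONGODB_* values shown further below, while the class name and the choice to connect in open_spider rather than __init__ are mine:

# A sketch, not the original pipeline: reads the same MONGODB_* settings
# through Scrapy's from_crawler hook and connects lazily in open_spider.
from pymongo import MongoClient


class MongoSettingsPipeline(object):
    def __init__(self, server, port, db_name, collection_name):
        self.server = server
        self.port = port
        self.db_name = db_name
        self.collection_name = collection_name

    @classmethod
    def from_crawler(cls, crawler):
        # crawler.settings is populated regardless of how the process was
        # started, so this also works when run.py is launched from cron.
        s = crawler.settings
        return cls(
            s.get('MONGODB_SERVER'),
            s.getint('MONGODB_PORT'),
            s.get('MONGODB_DB'),
            s.get('MONGODB_COLLECTION'),
        )

    def open_spider(self, spider):
        self.client = MongoClient(self.server, self.port)
        self.collection = self.client[self.db_name][self.collection_name]

    def close_spider(self, spider):
        self.client.close()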

Diffing the output of '/home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output' run from my terminal vs. from the cron job shows that the cron-launched process runs without ever executing the mongodb commands.
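
One way to pin down exactly what differs (a diagnostic step, not part of the original setup) is to let cron dump its environment to a file and compare it against the interactive shell's:

* * * * * env > /tmp/cron_env.txt

followed by `diff <(sort /tmp/cron_env.txt) <(env | sort)` in the terminal once the file exists.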

Specifically, inside link_spider, the log stops after the mongodb call:

import os
import sys

import scrapy

# Make the server/ directory importable for the mongo helpers used below.
lib_path = os.path.realpath(os.path.join(os.path.abspath(os.path.dirname(__file__)), '../../../', 'server'))
if lib_path not in sys.path:
    sys.path[0:0] = [lib_path]
from mongo import save_mongo, check_mongo


class LinkSpider(scrapy.Spider):

    def parse(self, response):
        ''' code to get urls to complete_list '''
        for url in complete_list:
            yield scrapy.Request(url=url, callback=self.parse)
            print("log")

        if check_mongo(url):
            print("log2")

The log seems to stop here.
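
Since the cron redirect only captures stdout, an exception raised inside check_mongo (for example a connection failure) can kill the callback without leaving anything in /tmp/output. A hypothetical wrapper (safe_check_mongo is my name, not in the original code) that makes such a failure visible:

import sys
import traceback

def safe_check_mongo(url):
    # Hypothetical wrapper around check_mongo from mongo.py: print the
    # traceback to stdout so it survives the `> /tmp/output` redirect.
    try:
        return check_mongo(url)
    except Exception:
        print("check_mongo failed for %s" % url)
        traceback.print_exc(file=sys.stdout)
        return False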

My mongo_connector file:

from pymongo import MongoClient
from scrapy.conf import settings


def check_mongo(url):
    # True if the url is not yet stored in the collection, False otherwise.
    connection = MongoClient()
    db = connection[settings['MONGODB_DB']]
    collection = db[settings['MONGODB_COLLECTION']]
    return collection.find({"url": url}).count() == 0
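
Note that MongoClient() with no arguments connects to the default localhost:27017 and, by design, does not raise until the first actual operation. A sketch of the same helper that connects to the configured server and fails fast when it is unreachable (the 5-second timeout is my choice):

from pymongo import MongoClient
from pymongo.errors import ServerSelectionTimeoutError
from scrapy.conf import settings


def check_mongo(url):
    # Connect explicitly to the configured server and give up after 5s
    # instead of hanging silently when MongoDB is unreachable.
    connection = MongoClient(
        settings['MONGODB_SERVER'],
        settings['MONGODB_PORT'],
        serverSelectionTimeoutMS=5000,
    )
    try:
        collection = connection[settings['MONGODB_DB']][settings['MONGODB_COLLECTION']]
        return collection.find({"url": url}).count() == 0
    except ServerSelectionTimeoutError:
        # Surface the failure instead of dying silently under cron.
        print("Could not reach MongoDB at %s:%s" % (
            settings['MONGODB_SERVER'], settings['MONGODB_PORT']))
        raise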

And the settings:

MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = 'articles'
MONGODB_COLLECTION = 'articles_data'

mongod.log:

2017-05-01T21:12:40.926+0000 I CONTROL  [main] ***** SERVER RESTARTED *****
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] MongoDB starting : pid=4249 port=27017 dbpath=/var/lib/mongodb 64-bit host=ubuntu
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] db version v3.2.12
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] git version: ef3e1bc78e997f0d9f22f45aeb1d8e3b6ac14a14
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.2g  1 Mar 2016
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] modules: none
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] build environment:
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten]     distmod: ubuntu1604
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten]     distarch: x86_64
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2017-05-01T21:12:40.932+0000 I CONTROL  [initandlisten] options: { config: "/etc/mongod.conf", net: { bindIp: "127.0.0.1", port: 27017 }, storage: { dbPath: "/var/lib/mongo$
2017-05-01T21:12:40.961+0000 I -        [initandlisten] Detected data files in /var/lib/mongodb created by the 'wiredTiger' storage engine, so setting the active storage en$
2017-05-01T21:12:40.961+0000 I STORAGE  [initandlisten] wiredtiger_open config: create,cache_size=4G,session_max=20000,eviction=(threads_max=4),config_base=false,statistics$
2017-05-01T21:12:41.300+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/var/lib/mongodb/diagnostic.data'
2017-05-01T21:12:41.300+0000 I NETWORK  [HostnameCanonicalizationWorker] Starting hostname canonicalization worker
2017-05-01T21:12:41.301+0000 I NETWORK  [initandlisten] waiting for connections on port 27017
2017-05-02T19:52:06.590+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T19:52:06.590+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:08:58.458+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:08:58.458+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:08:58.458+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T20:21:39.076+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T21:33:09.651+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1
2017-05-02T22:01:53.036+0000 I COMMAND  [conn46674] killcursors: found 0 of 1

1 Answer

You're right: processes started from crontab get their own minimal environment. This often causes problems when starting complex processes that depend on particular environment variables.

To fix this, try adding `. $HOME/.profile` in front of the command in your crontab. For example:

    PATH=/home/ubuntu/crawlers/env/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
    */15 * * * * . $HOME/.profile; /home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output
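
An alternative along the same lines (my addition, not part of the original answer) is to run the command through a login shell, which sources the profile files itself:

    */15 * * * * /bin/bash -lc '/home/ubuntu/crawlers/env/bin/python3 /home/ubuntu/crawlers/spider/evilscrapy/evilscrapy/run.py > /tmp/output'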
    
