
How to fix cluster health: yellow in Elasticsearch


I have set up MongoDB and ElasticSearch on a server. Using https://github.com/richardwilly98/elasticsearch-river-mongodb I have connected ElasticSearch and MongoDB together.

I create a new index with:

curl -XPUT 'http://127.0.0.1:9200/_river/mongodb/_meta' -d '{
    "type": "mongodb",
    "mongodb": {
        "db": "my_database",
        "collection": "my_collection"
    },
    "index": {
        "name": "mainindex",
        "type": "string",
        "bulk": {
            "concurrent_requests": 1
        }
    }
}'

After executing the command I go to http://x.x.x.x:9200/_plugin/head/ and I see the message:

cluster health: yellow (1, 6)
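
For reference, the cluster health API gives more detail on what is behind a yellow status; a quick check, assuming Elasticsearch listens on 127.0.0.1:9200 as in the command above:

curl -XGET 'http://127.0.0.1:9200/_cluster/health?pretty'

A yellow status means that all primary shards are allocated but one or more replica shards are not.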


3 Answers

  • 22

    I was looking for a solution to a similar problem when I came across this question, so I am posting my answer here.

    I have a development environment where I only need one Elasticsearch node, so I have no need for replicas of any of my indices.

    To solve the cluster health: yellow issue, I simply set index.number_of_replicas to 0:

    PUT /_settings
    {
        "index" : {
            "number_of_replicas" : 0
        }
    }
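
    Equivalently with curl; to confirm the setting took effect you can read it back (assuming the index name mainindex from the question):

    curl -XPUT 'http://127.0.0.1:9200/_settings' -d '{"index": {"number_of_replicas": 0}}'
    curl -XGET 'http://127.0.0.1:9200/mainindex/_settings?pretty'

    Note that PUT /_settings without an index name applies the change to every index in the cluster.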
    
  • 9

    Configure the cluster by using the same cluster name in the Elasticsearch configuration of every node.

    The default elasticsearch.yml you are probably using has these settings near the beginning:

    ################################### Cluster ###################################
    
    # Cluster name identifies your cluster for auto-discovery. If you're running
    # multiple clusters on the same network, make sure you're using unique names.
    #
    # cluster.name: elasticsearch
    
    
    #################################### Node #####################################
    
    # Node names are generated dynamically on startup, so you're relieved
    # from configuring them manually. You can tie this node to a specific name:
    #
    # node.name: "Franz Kafka"
    

    Here you need to configure a unique

    cluster.name: "MainCluster"

    and for each machine and/or instance a different, unique

    node.name: "LocalMachine1"

    You now need to copy this elasticsearch.yml to another machine (on the same network), or to another location on the same machine, e.g. elasticsearch_2.yml, and edit it:

    node.name: "LocalMachine2"

    and your cluster is ready to go.

    If node.name is not configured, Elasticsearch will pick a random Marvel character name (out of 3000, according to the docs), so leaving node.name unset should be fine too.

    To run two nodes on the same machine, you need a second configuration, e.g. the elasticsearch_2.yml copy with the changes above. You also need copies of the data and log paths, e.g. (Homebrew-specific paths):

    cp -r /usr/local/var/elasticsearch /usr/local/var/elasticsearch_2
    cp -r /usr/local/var/log/elasticsearch /usr/local/var/log/elasticsearch_2
    

    so that the Paths section of elasticsearch_2.yml may look like:

    #################################### Paths ####################################
    
    # Path to directory containing configuration (this file and logging.yml):
    #
    # path.conf: /path/to/conf
    
    # Path to directory where to store index data allocated for this node.
    #
    path.data: /usr/local/var/elasticsearch_2/
    #
    # Can optionally include more than one location, causing data to be striped across
    # the locations (a la RAID 0) on a file level, favouring locations with most free
    # space on creation. For example:
    #
    # path.data: /path/to/data1,/path/to/data2
    
    # Path to temporary files:
    #
    # path.work: /path/to/work
    
    # Path to log files:
    #
    path.logs: /usr/local/var/log/elasticsearch_2/
    

    Make sure you are not running Elasticsearch bound only to the localhost loopback device

    127.0.0.1

    Just comment that setting out if it is present (the Homebrew patch sets it this way):

    ############################## Network And HTTP ###############################
    
    # Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
    # on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
    # communication. (the range means that if the port is busy, it will automatically
    # try the next port).
    
    # Set the bind address specifically (IPv4 or IPv6):
    #
    # network.bind_host: 192.168.0.1
    
    # Set the address other nodes will use to communicate with this node. If not
    # set, it is automatically derived. It must point to an actual IP address.
    #
    # network.publish_host: 192.168.0.1
    
    # Set both 'bind_host' and 'publish_host':
    #
    # network.host: 127.0.0.1
    

    Now you can start Elasticsearch like this:

    bin/elasticsearch -D es.config=/usr/local/Cellar/elasticsearch/1.0.0.RC1/config/elasticsearch.yml
    

    for the first node, which becomes the master (because it is started first), and then:

    bin/elasticsearch -D es.config=/usr/local/Cellar/elasticsearch/1.0.0.RC1/config/elasticsearch_2.yml
    

    Now you should have two nodes running.
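
    To confirm that both nodes joined the same cluster, you can query the cat nodes API (a quick check, assuming the default HTTP port 9200):

    curl -XGET 'http://127.0.0.1:9200/_cat/nodes?v'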

  • 2

    It looks like you have no node for the shards to be replicated to. You can either lower the replica count to 0, or add a second node to the cluster so that the primary and replica shards can safely sit on different nodes.

    The default configuration in elasticsearch.yml is probably something like:

    index.number_of_shards: 5  
    index.number_of_replicas: 1
    

    The idea is that if a node crashes, another node in the cluster holds a copy of each shard. Since you only have one node, the cluster has nowhere to put the replicas, and so it stays in the yellow state.
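
    You can see the replicas waiting for a second node with the cat shards API (assuming Elasticsearch on 127.0.0.1:9200); shards with no node to go to are listed as UNASSIGNED:

    curl -XGET 'http://127.0.0.1:9200/_cat/shards?v'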
