
Elasticsearch connection refused when Kibana tries to connect

I am trying to run the ELK stack in Docker containers, but I am getting errors saying that Kibana is unable to establish a connection with Elasticsearch.

kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["error","elasticsearch","admin"],"pid":12,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:console@5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:elasticsearch@5.6.9","error"],"pid":12,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:metrics@5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1  | [2018-06-22T19:31:38,182][INFO ][o.e.d.DiscoveryModule    ] [g8HPieb] using discovery type [zen]
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","plugin:timelion@5.6.9","info"],"pid":12,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["listening","info"],"pid":12,"message":"Server running at http://0.0.0.0:5601"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:38Z","tags":["status","ui settings","error"],"pid":12,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1  | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node               ] initialized
elasticsearch_1  | [2018-06-22T19:31:38,634][INFO ][o.e.n.Node               ] [g8HPieb] starting ...
elasticsearch_1  | [2018-06-22T19:31:38,767][INFO ][o.e.t.TransportService   ] [g8HPieb] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
elasticsearch_1  | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks    ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
logstash_1       | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1       | log4j:WARN Please initialize the log4j system properly.
logstash_1       | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
logstash_1       | {:timestamp=>"2018-06-22T19:31:40.555000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2018-06-22T19:31:40Z","tags":["warning","elasticsearch","admin"],"pid":12,"message":"No living connections"}

Here is the content of my docker-compose file:

version: "2.0"

services:

  logstash:
    image: logstash:2
    ports:
      - "5044:5044"
    volumes:
      - ./:/config
    command: logstash -f /config/logstash.conf
    links:
      - elasticsearch
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:5.6.9
    ports:
      - "9200:9200"
    volumes:
      - "./es_data/es_data:/usr/share/elasticsearch/data/"

  kibana:
    image: kibana:5
    ports:
      - "5601:5601"
    links:
      - elasticsearch
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
    depends_on:
      - elasticsearch
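
As a side check, the compose file can be validated before starting the stack; a minimal sketch, assuming it is saved as docker-compose-elk.yml as in the run shown further below:

# Parse and print the resolved compose file; any YAML or indentation error is reported here
docker-compose -f docker-compose-elk.yml config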

The content of my logstash.conf:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  stdout {
    codec => rubydebug
  }
}
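
As a sanity check on the pipeline file, its syntax can be tested in a one-off container before running it; a minimal sketch, assuming the logstash:2 image from the compose file above (--configtest only parses the config, it does not start the pipeline):

# Run a throwaway container for the logstash service and only test the config syntax
docker-compose run --rm logstash logstash -f /config/logstash.conf --configtest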

I have curled Elasticsearch from both the elasticsearch container and the kibana container, and it looks fine to me:

{
  "name" : "g8HPieb",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "XxH0TcAmQcGqprf6s7TJEQ",
  "version" : {
    "number" : "5.6.9",
    "build_hash" : "877a590",
    "build_date" : "2018-04-12T16:25:14.838Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

curl localhost:9200/_cat/indices?pretty

yellow open .kibana GIBmXdlRQJmI67oq5r4oCg 1 1 1 0 3.2kb 3.2kb
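
For completeness, this is roughly how the checks above can be reproduced from the host and from inside the kibana container; a sketch, assuming curl is available in the kibana image and using the service names from the compose file:

# From the host, against the published port
curl "localhost:9200/_cat/indices?pretty"
# From inside the kibana container, using the hostname Kibana itself resolves
docker-compose exec kibana curl http://elasticsearch:9200/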

After increasing vm.max_map_count:

root@sfbp19:~/dockerizing-jenkins# sysctl -p
vm.max_map_count = 262144
root@sfbp19:~/dockerizing-jenkins# docker-compose -f docker-compose-elk.yml up

Creating network "dockerizingjenkins_default" with the default driver
Creating dockerizingjenkins_elasticsearch_1
Creating dockerizingjenkins_logstash_1
Creating dockerizingjenkins_kibana_1
Attaching to dockerizingjenkins_elasticsearch_1, dockerizingjenkins_kibana_1, dockerizingjenkins_logstash_1
elasticsearch_1  | [2018-06-26T19:08:19,294][INFO ][o.e.n.Node               ] [] initializing ...
elasticsearch_1  | [2018-06-26T19:08:19,363][INFO ][o.e.e.NodeEnvironment    ] [PVmTsqv] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/sfbp19--vg-root)]], net usable_space [671.9gb], net total_space [789.2gb], spins? [possibly], types [ext4]
elasticsearch_1  | [2018-06-26T19:08:19,364][INFO ][o.e.e.NodeEnvironment    ] [PVmTsqv] heap size [1.9gb], compressed ordinary object pointers [true]
elasticsearch_1  | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node               ] node name [PVmTsqv] derived from node ID [PVmTsqv3QnyS3sQarPcJ-A]; set [node.name] to override
elasticsearch_1  | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node               ] version[5.6.9], pid[1], build[877a590/2018-04-12T16:25:14.838Z], OS[Linux/4.4.0-31-generic/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/1.8.0_171/25.171-b11]
elasticsearch_1  | [2018-06-26T19:08:19,369][INFO ][o.e.n.Node               ] JVM arguments [-Xms2g, -Xmx2g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -Dlog4j.skipJansi=true, -XX:+HeapDumpOnOutOfMemoryError, -Des.path.home=/usr/share/elasticsearch]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [aggs-matrix-stats]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [ingest-common]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [lang-expression]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [lang-groovy]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [lang-mustache]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [lang-painless]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [parent-join]
elasticsearch_1  | [2018-06-26T19:08:20,040][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [percolator]
elasticsearch_1  | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [reindex]
elasticsearch_1  | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [transport-netty3]
elasticsearch_1  | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService     ] [PVmTsqv] loaded module [transport-netty4]
elasticsearch_1  | [2018-06-26T19:08:20,041][INFO ][o.e.p.PluginsService     ] [PVmTsqv] no plugins loaded
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:kibana@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch@5.6.9","info"],"pid":13,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["error","elasticsearch","admin"],"pid":13,"message":"Request error, retrying\nHEAD http://elasticsearch:9200/ => connect ECONNREFUSED 172.18.0.2:9200"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:console@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:elasticsearch@5.6.9","error"],"pid":13,"state":"red","message":"Status changed from yellow to red - Unable to connect to Elasticsearch at http://elasticsearch:9200.","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:20Z","tags":["status","plugin:metrics@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:21Z","tags":["status","plugin:timelion@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:21Z","tags":["listening","info"],"pid":13,"message":"Server running at http://0.0.0.0:5601"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:21Z","tags":["status","ui settings","error"],"pid":13,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}
elasticsearch_1  | [2018-06-26T19:08:21,190][INFO ][o.e.d.DiscoveryModule    ] [PVmTsqv] using discovery type [zen]
elasticsearch_1  | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node               ] initialized
elasticsearch_1  | [2018-06-26T19:08:21,654][INFO ][o.e.n.Node               ] [PVmTsqv] starting ...
elasticsearch_1  | [2018-06-26T19:08:21,780][INFO ][o.e.t.TransportService   ] [PVmTsqv] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
logstash_1       | log4j:WARN No appenders could be found for logger (io.netty.util.internal.logging.InternalLoggerFactory).
logstash_1       | log4j:WARN Please initialize the log4j system properly.
logstash_1       | log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"Unable to revive connection: http://elasticsearch:9200/"}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:23Z","tags":["warning","elasticsearch","admin"],"pid":13,"message":"No living connections"}
logstash_1       | {:timestamp=>"2018-06-26T19:08:23.572000+0000", :message=>"Connection refused (Connection refused)", :class=>"Manticore::SocketException", :level=>:error}
logstash_1       | {:timestamp=>"2018-06-26T19:08:23.790000+0000", :message=>"Pipeline main started"}
elasticsearch_1  | [2018-06-26T19:08:24,837][INFO ][o.e.c.s.ClusterService   ] [PVmTsqv] new_master {PVmTsqv}{PVmTsqv3QnyS3sQarPcJ-A}{coD5A4HyR7-1MedSq8dFUQ}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-elected-as-master ([0] nodes joined)[, ]
elasticsearch_1  | [2018-06-26T19:08:24,869][INFO ][o.e.h.n.Netty4HttpServerTransport] [PVmTsqv] publish_address {172.18.0.2:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_1  | [2018-06-26T19:08:24,870][INFO ][o.e.n.Node               ] [PVmTsqv] started
elasticsearch_1  | [2018-06-26T19:08:24,989][INFO ][o.e.g.GatewayService     ] [PVmTsqv] recovered [1] indices into cluster_state
elasticsearch_1  | [2018-06-26T19:08:25,148][INFO ][o.e.c.r.a.AllocationService] [PVmTsqv] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[.kibana][0]] ...]).
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:26Z","tags":["status","plugin:elasticsearch@5.6.9","info"],"pid":13,"state":"green","message":"Status changed from red to green - Kibana index ready","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://elasticsearch:9200."}
kibana_1         | {"type":"log","@timestamp":"2018-06-26T19:08:26Z","tags":["status","ui settings","info"],"pid":13,"state":"green","message":"Status changed from red to green - Ready","prevState":"red","prevMsg":"Elasticsearch plugin is red"}

======================== filebeat.yml ========================

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /jenkins/gerrit_volume/logs/*_log
#============================= Filebeat modules ===============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false
#==================== Elasticsearch template setting ==========================
setup.template.settings:
  index.number_of_shards: 3
#============================== Kibana =====================================
setup.kibana:
  #host: "localhost:5601"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.1.9.69:5044"]

logging.level: debug
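
As a sketch of how the Filebeat side can be verified before shipping anything (assuming a Filebeat 6.x binary, which matches the filebeat.inputs syntax above, and the default package config path /etc/filebeat/filebeat.yml):

# Validate filebeat.yml itself
filebeat test config -c /etc/filebeat/filebeat.yml
# Try to reach the configured Logstash output at 10.1.9.69:5044
filebeat test output -c /etc/filebeat/filebeat.yml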

1 Answer

  • 0

    It looks like there is an Elasticsearch problem in your logs that is preventing ES from initializing. This line:

    elasticsearch_1  | [2018-06-22T19:31:38,776][WARN ][o.e.b.BootstrapChecks    ] [g8HPieb] max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
    

    You can raise it temporarily with the following command:

    sysctl -w vm.max_map_count=262144
    

    Or set it permanently by adding the following line to /etc/sysctl.conf and then running sysctl -p (if you are on a live instance) to pick up the configuration:

    vm.max_map_count=262144
    

    Since you are doing this in Docker containers, you will probably want the latter option of setting it in /etc/sysctl.conf.
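
    To confirm the new value is actually in effect on the host (containers share the host kernel, so this is the value Elasticsearch will see), a quick check:

    sysctl vm.max_map_count
    # should print: vm.max_map_count = 262144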

    Reference: https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
