Kibana not communicating with Elasticsearch

Hello Community,

I am getting this error:

kibana                  | {"type":"log","@timestamp":"2018-07-19T05:54:47Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: https://172.16.10.181:9200/"}
kibana                  | {"type":"log","@timestamp":"2018-07-19T05:54:47Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}

My Elasticsearch is up and running; here is the result:

[root@ELK elasticsearch_certs]# curl -XGET -u elastic --cacert ca.crt https://localhost:9200/_cluster/health?pretty
Enter host password for user 'elastic':
{
  "cluster_name" : "test-cluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 4,
  "active_shards" : 4,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}

Here is my elasticsearch.yml file

node.name: elk-test1
node.master: true
node.data: true
node.ingest: true
cluster.name: "test-cluster"
network.host: 0.0.0.0
network.bind_host: 0.0.0.0
network.publish_host: 172.16.10.181
bootstrap.memory_lock: true
xpack.license.self_generated.type: trial
xpack.monitoring.collection.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.supported_protocols: TLSv1.2
xpack.ssl.certificate_authorities: /usr/share/elasticsearch/config/x-pack/certificates/ca.crt
xpack.ssl.certificate: /usr/share/elasticsearch/config/x-pack/certificates/elk-test1.crt
xpack.ssl.key: /usr/share/elasticsearch/config/x-pack/certificates/elk-test1.key

My kibana.yml file is:

server.host: "0"
server.name: kibana
elasticsearch.url: "https://172.16.10.181:9200"
elasticsearch.username: elastic
elasticsearch.password: <password>
server.ssl.enabled: true
server.ssl.key: /etc/kibana.key
server.ssl.certificate: /etc/kibana.crt
elasticsearch.ssl.certificateAuthorities: /etc/ca.crt
xpack.monitoring.ui.container.elasticsearch.enabled: true
xpack.security.encryptionKey: "<32 character key>"
xpack.monitoring.collection.enabled: true

And here are my Elasticsearch logs:

elasticsearch_master    | Created elasticsearch keystore in /usr/share/elasticsearch/config
elasticsearch_master    | OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
elasticsearch_master    | [2018-07-19T06:18:45,900][INFO ][o.e.n.Node               ] [elk-test1] initializing ...
elasticsearch_master    | [2018-07-19T06:18:45,963][INFO ][o.e.e.NodeEnvironment    ] [elk-test1] using [1] data paths, mounts [[/usr/share/elasticsearch/data (/dev/mapper/centos-root)]], net usable_space [29.8gb], net total_space [34.9gb], types [xfs]
elasticsearch_master    | [2018-07-19T06:18:45,963][INFO ][o.e.e.NodeEnvironment    ] [elk-test1] heap size [494.9mb], compressed ordinary object pointers [true]
elasticsearch_master    | [2018-07-19T06:18:45,980][INFO ][o.e.n.Node               ] [elk-test1] node name [elk-test1], node ID [QA8gJuhmQIuP69CHsH0Lig]
elasticsearch_master    | [2018-07-19T06:18:45,981][INFO ][o.e.n.Node               ] [elk-test1] version[6.3.1], pid[1], build[default/tar/eb782d0/2018-06-29T21:59:26.107521Z], OS[Linux/3.10.0-862.el7.x86_64/amd64], JVM[Oracle Corporation/OpenJDK 64-Bit Server VM/10.0.1/10.0.1+10]

elasticsearch_master    | [2018-07-19T06:18:51,227][INFO ][o.e.x.s.a.s.FileRolesStore] [elk-test1] parsed [0] roles from file [/usr/share/elasticsearch/config/roles.yml]
elasticsearch_master    | [2018-07-19T06:18:51,766][INFO ][o.e.x.m.j.p.l.CppLogMessageHandler] [controller/290] [Main.cc@109] controller (64 bit): Version 6.3.1 (Build 4d0b8f0a0ef401) Copyright (c) 2018 Elasticsearch BV
elasticsearch_master    | [2018-07-19T06:18:52,337][INFO ][o.e.d.DiscoveryModule    ] [elk-test1] using discovery type [zen]
elasticsearch_master    | [2018-07-19T06:18:53,084][INFO ][o.e.n.Node               ] [elk-test1] initialized
elasticsearch_master    | [2018-07-19T06:18:53,084][INFO ][o.e.n.Node               ] [elk-test1] starting ...
elasticsearch_master    | [2018-07-19T06:18:53,228][INFO ][o.e.t.TransportService   ] [elk-test1] publish_address {172.16.10.181:9300}, bound_addresses {0.0.0.0:9300}
elasticsearch_master    | [2018-07-19T06:18:53,252][INFO ][o.e.b.BootstrapChecks    ] [elk-test1] bound or publishing to a non-loopback address, enforcing bootstrap checks
elasticsearch_master    | [2018-07-19T06:18:56,331][INFO ][o.e.c.s.MasterService    ] [elk-test1] zen-disco-elected-as-master ([0] nodes joined)[, ], reason: new_master {elk-test1}{QA8gJuhmQIuP69CHsH0Lig}{ZeBl4ZoaTK241LxHi7Pdyg}{172.16.10.181}{172.16.10.181:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}
elasticsearch_master    | [2018-07-19T06:18:56,342][INFO ][o.e.c.s.ClusterApplierService] [elk-test1] new_master {elk-test1}{QA8gJuhmQIuP69CHsH0Lig}{ZeBl4ZoaTK241LxHi7Pdyg}{172.16.10.181}{172.16.10.181:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true}, reason: apply cluster state (from master [master {elk-test1}{QA8gJuhmQIuP69CHsH0Lig}{ZeBl4ZoaTK241LxHi7Pdyg}{172.16.10.181}{172.16.10.181:9300}{ml.machine_memory=2147483648, xpack.installed=true, ml.max_open_jobs=20, ml.enabled=true} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)[, ]]])
elasticsearch_master    | [2018-07-19T06:18:56,376][INFO ][o.e.x.s.t.n.SecurityNetty4HttpServerTransport] [elk-test1] publish_address {172.16.10.181:9200}, bound_addresses {0.0.0.0:9200}
elasticsearch_master    | [2018-07-19T06:18:56,376][INFO ][o.e.n.Node               ] [elk-test1] started
elasticsearch_master    | [2018-07-19T06:18:57,418][INFO ][o.e.l.LicenseService     ] [elk-test1] license [5f6bafe4-45df-455b-8d8e-c2b55005ec26] mode [trial] - valid
elasticsearch_master    | [2018-07-19T06:18:57,437][INFO ][o.e.g.GatewayService     ] [elk-test1] recovered [4] indices into cluster_state
elasticsearch_master    | [2018-07-19T06:18:58,004][INFO ][o.e.c.r.a.AllocationService] [elk-test1] Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.triggered_watches][0], [.monitoring-es-6-2018.07.19][0], [.watcher-history-7-2018.07.19][0]] ...]).
elasticsearch_master    | [2018-07-19T06:19:59,126][INFO ][o.e.c.m.MetaDataCreateIndexService] [elk-test1] [.monitoring-alerts-6] creating index, cause [auto(bulk api)], templates [.monitoring-alerts], shards [1]/[0], mappings [doc]
elasticsearch_master    | [2018-07-19T06:19:59,437][INFO ][o.e.c.r.a.AllocationService] [elk-test1] Cluster health status changed from [YELLOW] to [GREEN] (reason: [shards started [[.monitoring-alerts-6][0]] ...]).
elasticsearch_master    | [2018-07-19T06:19:59,620][INFO ][o.e.c.m.MetaDataMappingService] [elk-test1] [.watcher-history-7-2018.07.19/KdEXnDqxQte2sfXKeTk2oQ] update_mapping [doc]

Your configuration looks fine, and Elasticsearch itself looks fine.

I see you're able to access Elasticsearch at localhost; are you able to ping it from the Kibana machine via 172.16.10.181?
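
For example, a quick check along these lines from the Kibana machine (the --cacert path here assumes the same /etc/ca.crt your kibana.yml points at):

# basic network reachability
ping -c 3 172.16.10.181

# the same health check you ran locally, but against the address Kibana uses
curl -u elastic --cacert /etc/ca.crt https://172.16.10.181:9200/_cluster/health?pretty

If that curl works from there but Kibana still can't connect, the problem is more likely on Kibana's side of the TLS handshake than the network.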

Can you try temporarily disabling SSL verification to narrow down whether it's a certificate error?
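
In kibana.yml that would be something like this (assuming Kibana 6.x, where elasticsearch.ssl.verificationMode accepts full, certificate, or none; troubleshooting only, not for production):

# temporarily skip certificate validation for the Kibana -> Elasticsearch connection
elasticsearch.ssl.verificationMode: none

If Kibana connects with verification off, it points at the certificate/CA setup; if it still fails, it's more likely a networking issue.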

I disabled all the X-Pack security settings but ran into the same issues again.

Here are the Kibana logs:
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:tilemap@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:watcher@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:index_management@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:graph@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:security@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:grokdebugger@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:logstash@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:10Z","tags":["status","plugin:reporting@6.3.1","error"],"pid":1,"state":"red","message":"Status changed from red to red - No Living connections","prevState":"red","prevMsg":"Unable to connect to Elasticsearch at http://172.16.10.181:9200."}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:12Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://172.16.10.181:9200/"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:12Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:16Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://172.16.10.181:9200/"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:16Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:19Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://172.16.10.181:9200/"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:19Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:23Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"Unable to revive connection: http://172.16.10.181:9200/"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:23Z","tags":["warning","elasticsearch","admin"],"pid":1,"message":"No living connections"}
kibana | {"type":"log","@timestamp":"2018-07-20T05:41:25Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"}

Are you able to `curl -XGET` Elasticsearch from the Kibana server? If you're using containers, for example, they need to be networked together, and ports may need to be opened too if there's a firewall in the way.
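
A rough sketch of what I mean, assuming you're on docker-compose with a service named kibana and a CentOS 7 host running firewalld (adjust names and tools to your setup):

# from inside the Kibana container: can we reach Elasticsearch at all?
docker exec -it kibana curl -k https://172.16.10.181:9200

# on the host: open the HTTP port if firewalld is blocking it
firewall-cmd --permanent --add-port=9200/tcp
firewall-cmd --reload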
