Error: plugin:elasticsearch ! Service Unavailable

INFORMATION

Kibana 5.2 and Elasticsearch 5.2 installed on the same host from yum repos (CentOS 7).
Kibana is set up to use a local coordinating-only Elasticsearch node, following this document. My kibana.yml file looks like this:

server.port: 5601
server.name: "s-ut-kibana-1"
server.host: 10.1.0.21
elasticsearch.url: "http://localhost:9200"

The elasticsearch.yml file looks like this:

cluster.name: es_cluster_draper_ut
node.name: s-ut-kibana-1.stc.local
bootstrap.memory_lock: false
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ['10.1.0.20', '10.1.0.21']
node.master: false
node.data: false
node.ingest: false
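
As a sketch of how to verify that this coordinating-only node has actually joined the cluster (assuming the HTTP port is reachable locally), the _cat/nodes API should list both nodes, with this one showing no master/data/ingest roles:

# curl -XGET 'localhost:9200/_cat/nodes?v'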

I have one elasticsearch data node (for now) at IP address 10.1.0.20. The Kibana server is 10.1.0.21. If I curl localhost, I get:

# curl -XGET 'localhost:9200/?pretty'
{
  "name" : "s-ut-kibana-1.stc.local",
  "cluster_name" : "es_cluster_draper_ut",
  "cluster_uuid" : "_na_",
  "version" : {
    "number" : "5.2.0",
    "build_hash" : "24e05b9",
    "build_date" : "2017-01-24T19:52:35.800Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.0"
  },
  "tagline" : "You Know, for Search"
}
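
Worth noting: the "cluster_uuid" : "_na_" in that response indicates the local node has not actually joined the cluster (the data node below reports a real UUID). One quick way to confirm is a cluster health call through the local node; if the node is isolated it will typically fail with a master-not-discovered error instead of returning the cluster status:

# curl -XGET 'localhost:9200/_cluster/health?pretty'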

If I curl 10.1.0.20, I get:

# curl -XGET '10.1.0.20:9200/?pretty'
{
  "name" : "s-ut-elastic-1.stc.local",
  "cluster_name" : "es_cluster_draper_ut",
  "cluster_uuid" : "Xf7jgC6UQPi8xFSvSW8ukw",
  "version" : {
    "number" : "5.2.0",
    "build_hash" : "24e05b9",
    "build_date" : "2017-01-24T19:52:35.800Z",
    "build_snapshot" : false,
    "lucene_version" : "6.4.0"
  },
  "tagline" : "You Know, for Search"
}

So I know the Kibana host can reach the data node.

PROBLEM

When I open the Kibana web interface, I get the "plugin:elasticsearch Service Unavailable" error from the title.

However, if I modify the kibana.yml file so

elasticsearch.url: "http://localhost:9200"

instead says:

elasticsearch.url: "http://10.1.0.20:9200"

and restart Kibana, the interface loads and wants to create an index.

Elasticsearch is definitely running on the Kibana host, and port 9200 is shown as listening. So what is wrong with my config that it will not let me point Kibana at the Elasticsearch instance on localhost?
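
For completeness, this is how the listening ports can be checked on the Kibana host (ss ships with CentOS 7); both 9200 (HTTP) and 9300 (transport) should show up for the local Elasticsearch:

# ss -tlnp | grep -E '9200|9300'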

Did you already have a .kibana index on any of the nodes prior to creating this?
It's the only way I've managed to reproduce the issue on my side.
Here are the steps that I took to make sure this works without a hitch:

  1. Start the clean data cluster first (or with the .kibana index deleted from it - this will delete any visualizations, objects or dashboards stored in Kibana; see the check below the list).
  2. Start the clean coordinating node.
  3. Start Kibana and wait for it to create the index (this only happens on the first run, and it takes a bit longer than a normal startup since the index is created on the data node via the coordinating node).
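
A quick way to run the check from step 1, i.e. to see whether a .kibana index already exists on the data node, and to delete it if it does (note again that deleting it removes any saved Kibana objects):

# curl -XGET '10.1.0.20:9200/_cat/indices/.kibana?v'
# curl -XDELETE '10.1.0.20:9200/.kibana'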

This is a brand-new setup; the index hasn't been created yet. No logs have been ingested into Elasticsearch yet, as I have been following the recommended deployment guide located here:

https://www.elastic.co/guide/en/elastic-stack/current/installing-elastic-stack.html

Like I mentioned, if I point Kibana at the data node (via elasticsearch.url), it wants to create an index. If I point it at localhost, it says "Service Unavailable".

When you point it at the data node it works just fine because it has write permissions there, but through the coordinating node it doesn't, and somehow the coordinating node is failing to route requests to the other node. Did you start them in the same order that I mentioned in the previous comment?

It looks like the solution was to open port 9300 on the coordinating-only node for traffic coming from the other Elasticsearch nodes. The documentation does not mention firewall exceptions in any of the Kibana/Elasticsearch configuration guides, so it took digging through the iptables logs to find the required port.
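
For anyone hitting the same thing, a minimal sketch of the firewall exception on the coordinating-only node (this assumes plain iptables rules as in my case; how the rule is persisted depends on whether the host uses iptables-services or firewalld, where the equivalent would be firewall-cmd --permanent --add-port=9300/tcp followed by firewall-cmd --reload):

# iptables -I INPUT -p tcp -s 10.1.0.20 --dport 9300 -j ACCEPT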
