Unable to connect to Kibana

Hello,

I have configured ELK with nginx in front for HTTPS and authentication. This setup worked perfectly until this morning (I had stopped the server for a few days and just restarted it).

When I try to connect to Kibana with Firefox I get the following message: "Firefox can’t establish a connection to the server at 192.168.100.100"

I can ping my server, and port 443 is open and listening.

On the server itself I can curl Elasticsearch: "curl http://localhost:9200"

I looked in the Kibana and nginx logs but see no specific error.

Has anyone already had similar issues?

Why not use the free Security features to do this?

If I am not mistaken, this functionality was not part of the free license when I implemented this setup.
Before I stopped and restarted the server this setup worked perfectly, and I can't figure out the issue I have now.

We use a very similar setup.

First, make sure the host where Nginx is running can connect to Kibana on the correct port. You can try something like

$ nc -v -z my_kibana_ip 5601
my.kibana.local [10.0.0.1] 5601 (?) open

or

$ curl -s -I http://my_kibana_ip:5601
HTTP/1.1 302 Found
location: /spaces/enter
kbn-name: my.kibana.local
kbn-license-sig: d865465067a482e8b79266a0cacbbf52067452bb2d7eda73c659993b0a054995
kbn-xpack-sig: ef42adb69cd0ad9c36eb0323e8819b1d
cache-control: private, no-cache, no-store, must-revalidate
content-length: 0
Date: Thu, 06 Aug 2020 10:02:12 GMT
Connection: keep-alive

If that works, next check your Nginx config. You should definitely see some errors in the logs if Nginx cannot reach one of its backends.
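
For example (assuming a standard package install, so paths may differ on your distribution), validating the config and watching the error log while you reload the page usually points at the failing backend:

$ sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

$ sudo tail -f /var/log/nginx/error.log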

Yes it works. Nginx can connect to Kibana.

I thought it was a network issue, but apparently it is not, because when I look at the Elasticsearch log I find the error below:

[2020-08-06T12:37:10,303][WARN ][r.suppressed             ] [miXbRBA] path: /.kibana_task_manager/_doc/_search, params: {ignore_unavailable=true, index=.kibana_task_manager, type=_doc}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed

[...]

[2020-08-06T12:45:42,739][INFO ][o.e.c.r.a.DiskThresholdMonitor] [miXbRBA] low disk watermark [85%] exceeded on [miXbRBAXTkiAo4KZhbFevA][miXbRBA][/var/lib/elasticsearch/nodes/0] free: 1.3gb[13.3%], replicas will not be assigned to this node

So apparently it is due to low disk space but I am not sure how I can bypass the alert or move older data manually.

I have been in that situation a few times as well.

The quickest and easiest solution is deleting some indices, if you can afford to do that (you will lose data).
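
For example (a sketch with a hypothetical index pattern; double-check with _cat/indices first so you know exactly what you are deleting):

$ curl -X DELETE "localhost:9200/winlogbeat-2020.07.*?pretty"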

You could also temporarily decrease the replication factor if that is set high. It will reduce resilience but might be acceptable for some time.
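
A rough sketch of that, assuming it is the winlogbeat indices taking most of the space:

$ curl -X PUT "localhost:9200/winlogbeat-*/_settings?pretty" -H 'Content-Type: application/json' -d'
{
  "index": { "number_of_replicas": 0 }
}
'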

Other, less easy approaches would be to add ES nodes to the cluster to increase total capacity. The cluster will rebalance storage over time and recover from your current situation.

Or you can add a snapshot repository and back up data first if you can't afford to lose any of it permanently.
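
A minimal sketch of a shared-filesystem repository (the repo name and path here are placeholders, and the path must also be whitelisted via path.repo in elasticsearch.yml):

$ curl -X PUT "localhost:9200/_snapshot/my_backup?pretty" -H 'Content-Type: application/json' -d'
{
  "type": "fs",
  "settings": { "location": "/mnt/backups/es" }
}
'
$ curl -X PUT "localhost:9200/_snapshot/my_backup/snapshot_1?wait_for_completion=true&pretty"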

What is the proper procedure to safely delete old indices?

I removed old indices and disk space is now 62% free. But I still cannot access the Kibana UI.

Is the ES cluster health green?
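
You can check that quickly with something like

curl -XGET es_host:9200/_cluster/health?pretty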

The allocation explain API should also show whether there are still any problems with ES:

curl -XGET es_host:9200/_cluster/allocation/explain?pretty

Check for explanation in the output.

Kibana can take a bit of time to start. I find the Kibana logs are not always that clear about what the problem is, but there should definitely be something in the logs if Kibana can't start.
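
If Kibana runs under systemd, something like this should show the startup logs:

$ journalctl -u kibana.service --since "10 minutes ago"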

Did you try netcat or something similar, like lsof?

$ nc -v -z my_kibana_ip 5601
my.kibana.local [10.0.0.1] 5601 (?) open

Is Kibana listening on its port or not?

Kibana starts correctly, but with some info logs:

 kibana.service - Kibana
     Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
     Active: active (running) since Thu 2020-08-06 14:06:34 UTC; 34min ago
   Main PID: 2876 (node)
      Tasks: 11 (limit: 9452)
     Memory: 254.9M
     CGroup: /system.slice/kibana.service
             └─2876 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml

Aug 06 14:06:46 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:46Z","tags":["status","plugin:security@6.8.9","info"],"pid":2876,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
Aug 06 14:06:46 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:46Z","tags":["status","plugin:maps@6.8.9","info"],"pid":2876,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
Aug 06 14:06:46 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:46Z","tags":["reporting","warning"],"pid":2876,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
Aug 06 14:06:47 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:47Z","tags":["status","plugin:reporting@6.8.9","info"],"pid":2876,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
Aug 06 14:06:47 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:47Z","tags":["license","info","xpack"],"pid":2876,"message":"Imported license information from Elasticsearch for the [monitoring] cluster: mode: basic | status: active"}
Aug 06 14:06:47 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:47Z","tags":["info","migrations"],"pid":2876,"message":"Creating index .kibana_1."}
Aug 06 14:06:48 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:48Z","tags":["info","migrations"],"pid":2876,"message":"Pointing alias .kibana to .kibana_1."}
Aug 06 14:06:48 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:48Z","tags":["info","migrations"],"pid":2876,"message":"Finished in 446ms."}
Aug 06 14:06:48 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:48Z","tags":["listening","info"],"pid":2876,"message":"Server running at http://192.168.100.100:5601"}
Aug 06 14:06:48 KIBANA-srv kibana[2876]: {"type":"log","@timestamp":"2020-08-06T14:06:48Z","tags":["status","plugin:spaces@6.8.9","info"],"pid":2876,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}

With the following command

curl -XGET es_host:9200/_cluster/allocation/explain?pretty

I get this output:

{
  "index" : "winlogbeat-2020.08.06",
  "shard" : 3,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2020-08-06T14:35:54.238Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "miXbRBAXTkiAo4KZhbFevA",
      "node_name" : "miXbRBA",
      "transport_address" : "127.0.0.1:9300",
      "node_attributes" : {
        "ml.machine_memory" : "8349188096",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20",
        "ml.enabled" : "true"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "enable",
          "decision" : "NO",
          "explanation" : "replica allocations are forbidden due to cluster setting [cluster.routing.allocation.enable=primaries]"
        },
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "the shard cannot be allocated to the same node on which a copy of the shard already exists [[winlogbeat-2020.08.06][3], node[miXbRBAXTkiAo4KZhbFevA], [P], s[STARTED], a[id=LveDU1oJTzWRySSK59yGyQ]]"
        }
      ]
    }
  ]
}

I tried to delete this index but I still have the same message.

For further information:

curl localhost:9200/_cat/indices?v

health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .kibana_1             dTDKefezRzaLXbuWj9qrig   1   0          2            0      7.6kb          7.6kb
green  open   .kibana_task_manager  LsDYxnk6S0KpSTtmnuzv6A   1   0          2            0     12.6kb         12.6kb
yellow open   winlogbeat-2020.08.06 GGu_OxJGT3C5DuXiIlYsdA   5   1        146            0      1.9mb          1.9mb

For "replica allocations are forbidden due to cluster setting [cluster.routing.allocation.enable=primaries]" you should do:

curl -X PUT "es_host:9200/_cluster/settings?pretty" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.routing.allocation.enable": null
  }
}
'

That might solve the rest of the issues as well. I am not sure whether you will also have to trigger a retry of failed shard allocations; I have had to do that in the past, but it is not always needed.
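
If it does turn out to be needed, the retry I mean is something like:

curl -X POST "es_host:9200/_cluster/reroute?retry_failed=true&pretty"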

@A_B Thanks for your help with this.

I now have the following error:

"allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",

Is this the reason you said I might have to initiate shard allocation?

"cannot allocate because allocation is not permitted to any of the nodes"

That could be a few things. The ones I have experienced myself are:

  1. Not enough free space. Nodes are not allowed to receive new shards if they are over their low disk watermark (see the check after this list).
  2. Shard allocation has been disabled. It should not be that anymore after the settings change above.
  3. Cluster-level shard allocation settings (we use some of these) could also cause it.

I know it is a bit to read, but your answer should be in the cluster-level shard allocation documentation.
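
For point 1, a quick way to see disk use and shard counts per node is something like:

curl es_host:9200/_cat/allocation?v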

Hi @pchar,

I should have thought about this right away...

Check the cluster settings

GET _cluster/settings

Sharing it here could help identify possible reasons for your current problem as well :)

Please see the results below:

{"persistent":{},"transient":{"cluster":{"routing":{"allocation":{"disk":{"watermark":{"low":"90%"}},"enable":"all"}}}}}

That looks like what I would expect. Is Kibana still not starting, @pchar?

I am not sure of the reason, but the issue has changed. Kibana now starts, but I get "502 Bad Gateway" from nginx. However, nginx is on the same server, and when I try on the server itself it works:

curl -s -I http://192.168.100.100:5601
HTTP/1.1 302 Found
location: /app/kibana
kbn-name: kibana
kbn-xpack-sig: 5c37b7e6745e03f5023340da2bd310b6
content-type: text/html; charset=utf-8
cache-control: no-cache
content-length: 0
connection: close
Date: Fri, 07 Aug 2020 13:27:49 GMT

At least the issue has evolved and no longer looks linked to Kibana...

I had to make a small change in one nginx config file and now Kibana is working again.

Thanks @A_B for your support and your useful inputs.
