Kibana server is not ready yet after upgrading from 6.4 to 6.5

Hi,

After successfully upgrading my hot swarm stack from 6.4.2 to 6.5.4 (rolling upgrade), everything works fine except Kibana.
So I deployed a new instance with a fresh Kibana install, but hit the same issue :frowning:

  • Kibana starts normally
  • port ok
  • high CPU usage
  • no web GUI, just "server is not ready", even after 2 hours of waiting :frowning:

The ES and Kibana logs seem normal. The only strange thing I found is the CPU usage of the Kibana nodes, above 100%.

I've already (the equivalent commands are sketched just after this list):

  • checked and cleaned the .kibana* indices
  • set "index.blocks.read_only_allow_delete" : false
  • set "cluster.routing.allocation.enable" : null
  • set NODE_OPTIONS="--max_old_space_size=4096"
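
For reference, those checks/settings correspond to something like the following commands (a sketch, assuming ES listens on localhost:9200 without auth; adjust host and credentials to your cluster):

# List the Kibana saved-objects indices
curl -s 'http://localhost:9200/_cat/indices/.kibana*?v'

# Remove the read-only block from the Kibana indices
curl -s -XPUT 'http://localhost:9200/.kibana*/_settings' -H 'Content-Type: application/json' -d '{"index.blocks.read_only_allow_delete": false}'

# Reset shard allocation to the default (null clears the transient override)
curl -s -XPUT 'http://localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"transient": {"cluster.routing.allocation.enable": null}}'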

The Kibana optimize step crashed the first time with:
FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory

Now I've added the following node option:
echo 'NODE_OPTIONS="--max_old_space_size=4096"' >> /etc/default/kibana
The run is still ongoing (almost 15 min) and the logs are unchanged, still stuck on this last (standard ops) line:

{"type":"ops","@timestamp":"2019-01-16T14:34:25Z","tags":[],"pid":15328,"os":{"load":[1.56396484375,2.2734375,2.244140625],"mem":{"total":33568444416,"free":28672569344},"uptime":4936568},"proc":{"uptime":126.805,"mem":{"rss":837775360,"heapTotal":778944512,"heapUsed":362788664,"external":17211884},"delay":99.98171710968018},"load":{"requests":{},"concurrents":{"0":0},"responseTimes":{},"sockets":{"http":{"total":0},"https":{"total":0}}},"message":"memory: 346.0MB uptime: 0:02:07 load: [1.56 2.27 2.24] delay: 99.982"}

ES root endpoint (GET /):

{
  "name" : "master_1",
  "cluster_name" : "ES",
  "cluster_uuid" : "-------------------",
  "version" : {
    "number" : "6.5.4",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "d2ef93d",
    "build_date" : "2018-12-17T21:17:40.758843Z",
    "build_snapshot" : false,
    "lucene_version" : "7.5.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

ES health (_cat/health):
1547649816 14:43:36 ES green 8 6 326 163 0 0 0 0 - 100.0%

Kibana uptime
15:47:49 up 57 days, 3:29, 3 users, load average: 1.17, 1.37, 1.70

Kibana top
15328 root 20 0 2678560 1.5g 13304 R 175.0 4.8 18:55.29 node

Any help would be much appreciated :smiley:
Thanks in advance and have a nice day.

Guillain

Fixed!

I don't think it was a miracle... I'll summarize my actions. If someone has information about the underlying Kibana and ES mechanism, it would improve my understanding, thanks in advance!

My last tests were:

  • ran the optimize step on the (original) Kibana
    • it ended the same way as before: crashed with the Node.js out-of-memory error
  • the second Kibana instance finished starting properly 2 hours later (in parallel, I was executing the actions on the first one + ES)

I imagine that's due to Kibana reviewing the old index... but I need clarification to understand it properly.
If you have docs to read to guide my study, that would be perfect!
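
If it helps anyone, the state of that index review can be inspected with something like this (again assuming ES on localhost:9200; as far as I understand, Kibana 6.5 migrates saved objects into a new versioned index and points a .kibana alias at it):

# Show which concrete index the .kibana alias points at
curl -s 'http://localhost:9200/_cat/aliases/.kibana?v'

# List the Kibana indices with their doc counts and sizes
curl -s 'http://localhost:9200/_cat/indices/.kibana*?v'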

Thanks to all and enjoy :-p

@tiagocosta one more for you.

Thanks,
Bhavya

@Guillain your problem looks related to the optimization process itself and not to the Kibana indices.

We have made several improvements on this front that will land in 6.6.x and 6.7.x.
Meanwhile, I believe the best workaround for optimization stalls would be (a consolidated sketch follows the steps):

  1. Stop Kibana.
  2. Set the current directory to the Kibana installation dir (for example cd /usr/share/kibana ).
  3. rm -rf optimize/bundles
  4. NODE_OPTIONS="--max-old-space-size=4096" ./bin/kibana
  5. Kibana should re-optimize and start normally.
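
Put together, and assuming a systemd-managed package install under /usr/share/kibana (adjust the paths and service manager to your setup), the steps amount to something like:

# Stop the service, clear the stale optimize bundles, then run Kibana
# in the foreground with a larger Node.js heap so it can re-optimize
sudo systemctl stop kibana
cd /usr/share/kibana
rm -rf optimize/bundles
NODE_OPTIONS="--max-old-space-size=4096" ./bin/kibana

Once the optimization finishes and Kibana comes up cleanly, you can stop the foreground process and start the service normally again.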

If, on the other hand, your problem were related to the Kibana indices, you can find more info in the following issue: https://github.com/elastic/kibana/issues/25806
