Kibana is extremely slow (Loading graphs, new pages, etc.)

Ubuntu 18.04
16 Cores
16GB RAM

ELK system
ELK 7.14
Single node (Kibana, Elastic, Logstash)
Everything was deployed with Ubuntu apt (Install Elasticsearch with Debian Package | Elasticsearch Guide [7.14] | Elastic).
I made sure to apply the bootstrap settings described here:
(Important System Configuration | Elasticsearch Guide [7.14] | Elastic)
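For anyone following along, the main bootstrap-related settings from that guide (shown here as a sketch for the systemd/Debian install; file paths are the conventional locations, not necessarily exactly what I used) look roughly like this:

```
# /etc/sysctl.d/99-elasticsearch.conf — raise mmap count for Elasticsearch
vm.max_map_count = 262144

# /etc/systemd/system/elasticsearch.service.d/override.conf
# (created via: systemctl edit elasticsearch) — needed if bootstrap.memory_lock is enabled
[Service]
LimitMEMLOCK=infinity
```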

Sending Zeek logs to system via Filebeat

Kibana is extremely slow to do anything. Sometimes it runs quickly after a refresh, but for the most part graphs take forever to refresh, and pages act like they are loading but never finish. It's an all-around unusable system right now. I'm fairly certain this is a problem with some configuration I made.

Configuration files (With comments removed)

  • elasticsearch.yml
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
http.port: 9200
discovery.type: single-node

Notice I made Elasticsearch bind to the external address.

  • kibana.yml
server.port: 5601
server.host: "localhost"
elasticsearch.hosts: [""]
  • logstash.yml
path.data: /var/lib/logstash
path.logs: /var/log/logstash

Not sure if it's related, but I ran into an issue where the Logstash logs showed it kept crashing. I think it needed a pipeline configuration file in /etc/logstash/conf.d, so I added /etc/logstash/conf.d/02-beats-input.conf:

input {
  beats {
    port => 5044
  }
}
That stopped the error, but I don't know if it caused other issues.
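For what it's worth, a beats input on its own doesn't ship events anywhere; a Logstash pipeline also needs an output section. A minimal sketch (assuming Elasticsearch on localhost:9200 with no auth; the index name here is just an illustrative choice) would look like:

```
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "zeek-%{+YYYY.MM.dd}"
  }
}
```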

htop shows elasticsearch, kibana, and logstash taking up the most CPU time... but that's obvious, since that's all that is running on the system.

Reading online, most people seem to believe this kind of issue is with Elasticsearch, not Kibana. I am new to ELK, so I really don't know where to start troubleshooting this problem. Any recommendations?

Hi @Dave_Houser Welcome to the community

Run these and report back

GET /_cluster/health

GET /_cat/nodes/?v&h=name,du,dt,dup,hp,hc,rm,rp,r

Thanks for the reply.
Where do I run these commands? I assume not from the box's shell. Sorry for being such a noob :expressionless:

From Kibana -> Dev Tools

You can actually curl them from the Elasticsearch host. The exact command depends on whether you are using auth or not; these are simple REST commands, so if you are familiar with curl they would look something like this:

curl -X GET -u username:password https://host:9200/_cluster/health/?pretty

No auth, no HTTPS:

curl -X GET http://host:9200/_cluster/health/?pretty

I don't see a "Dev Tools" option when selecting "Kibana" in the main web interface.

Here is the curl output for both commands:

  • Output for the _cluster/health command:
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 10,
  "active_shards" : 10,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 1,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 90.9090909090909
}
  • Output for the _cat/nodes command:
name       du     dt   dup hp      hc     rm rp
elk-01 17.9gb 72.8gb 24.66 30 157.8mb 15.6gb 44

So debugging this could be a lot of things, but the first thing I notice is that it looks like the JVM heap for Elasticsearch is very small.

Running everything on 1 box can be challenging from a resource contention perspective.

hp = heap Percent = 30%
hc = Heap Current : 157.8M
So your total heap is probably set at 512MB
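The 512MB estimate follows directly from the two _cat/nodes columns; a quick back-of-the-envelope check (values taken from the output above):

```python
# Back-of-the-envelope: total heap ≈ heap current / heap percent
heap_current_mb = 157.8   # hc column from _cat/nodes
heap_percent = 30         # hp column from _cat/nodes
total_heap_mb = heap_current_mb / (heap_percent / 100)
print(round(total_heap_mb))  # ≈ 526 MB — i.e. the 512 MB default, given hp is rounded
```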

So the first thing I would do is go into the jvm.options file and increase the JVM heap.

Technically, on 7.14 the heap should be auto-calculated, and it should have tried to take ~8GB of RAM, but I suspect you may have an old config.

Try setting it to 2GB or even 4GB.
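On the Debian package, the usual way to do that is a small override file under /etc/elasticsearch/jvm.options.d/ rather than editing jvm.options directly (the filename below is just an example; Xms and Xmx should match):

```
# /etc/elasticsearch/jvm.options.d/heap.options
-Xms2g
-Xmx2g
```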


I would start Elasticsearch first and let it claim its memory.

But for a new person there can be a lot of reasons... bad index design/mappings, bad dashboard design, lots of things.

Running it all on the same box is not recommended... fine for a bit of testing, but if you want stability and performance it's not a great plan.

I figured it out... it was an IP conflict the whole time.... ugh :persevere:
Thank you for bringing up the heap size, though. I did set Xmx and Xms to 512m based on another guide I was following, which looks to be dated. I adjusted the heap size to 2g for both values; hopefully that will avoid future issues. Closing this one out.

