Kibana/ES too slow - Discover view

Hi everyone,

I've been trying for a few weeks to optimize my Kibana/ELK stack because it's really slow, and I don't know what more I can do now :(.

When I refresh the Discover view, for example after switching from one date range to another, it takes more than 10 seconds to display, and I only have 33k documents in my index for now.
In Kibana's Inspect panel, it reports a query time of 330 ms but a request time of over 6000 ms.

However, when the selected range contains no documents, it's fast.
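
To separate ES time from everything else, the Discover request can be replayed directly in Dev Tools and the response's "took" compared with what Inspect reports. A rough sketch, with a placeholder index name and timestamp field:

GET my-index/_search
{
  "size": 500,
  "sort": [{ "@timestamp": { "order": "desc" } }],
  "query": {
    "range": { "@timestamp": { "gte": "now-7d", "lte": "now" } }
  }
}

If "took" stays around a few hundred milliseconds while the full round trip takes seconds, the time is being spent on transfer or in Kibana rather than in the ES query itself.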

My configuration is as follows (I should mention that we need to use Docker, so we can't remove that part):

  • a cluster of 3 Elasticsearch nodes, each one in a Docker container on its own remote host: XXX...137, XXX...140 and XXX...143.
    Two of them run with an 8 GB JVM heap, the third with 4 GB (I don't have more RAM on that server right now).
    The data is stored directly on the host (no cloud or NFS mount); only the certificates live on an NFS-mounted share.

  • the same for Kibana: in a Docker container on a dedicated host, XXX...136

My cluster is green. ES & Kibana are on version 7.11.0.
In the end, my index will hold fewer than 1M documents, so I use 1 primary shard and 1 replica.
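
For reference, the shard layout and on-disk size can be double-checked like this (index name is a placeholder):

GET _cat/shards/my-index?v
GET _cat/indices/my-index?v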

Here is GET /_cat/nodes?v

ip       heap.percent ram.percent cpu load_1m load_5m load_15m node.role  master name
XXX..143           69          90   4    0.12    0.16     0.11 cdhilmrstw *      prod_es_node_03
XXX..137           32          47   2    0.14    0.14     0.24 cdhilmrstw -      prod_es_node_01
XXX..140           48          48   6    0.08    0.15     0.27 cdhilmrstw -      prod_es_node_02
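
While a slow Discover request is running, the search thread pools and hot threads can also be inspected, to see whether anything is queued, rejected, or unusually busy:

GET _cat/thread_pool/search?v&h=node_name,name,active,queue,rejected
GET _nodes/hot_threads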

Here is my docker-compose for each node:

version: "3.7"

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${DOCKER_ES_VERSION}
    container_name: es01
    restart: unless-stopped
    environment:
      - cluster.name=${ENV}_docker-cluster
      - node.name=${ENV}_es_node_01
      - node.master=true
      - node.data=true
      - cluster.initial_master_nodes=${ENV}_es_node_01,${ENV}_es_node_02,${ENV}_es_node_03
      - discovery.seed_hosts=XXX..137,XXX..140,XXX..143
      - network.host=XXX..137
      - bootstrap.memory_lock=true
      - ES_JAVA_OPTS=-Xms8g -Xmx8g
      - ELASTIC_PASSWORD=${DOCKER_ES_PASSWORD}
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=false # HTTP client communication
      - xpack.security.http.ssl.verification_mode=certificate
      - xpack.security.http.ssl.key=${DOCKER_ES_CERTS_PATH}/es01/es01.key
      - xpack.security.http.ssl.certificate_authorities=${DOCKER_ES_CERTS_PATH}/ca/ca.crt
      - xpack.security.http.ssl.certificate=${DOCKER_ES_CERTS_PATH}/es01/es01.crt
      - xpack.security.transport.ssl.enabled=true # Inter-node communication
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.security.transport.ssl.certificate_authorities=${DOCKER_ES_CERTS_PATH}/ca/ca.crt
      - xpack.security.transport.ssl.certificate=${DOCKER_ES_CERTS_PATH}/es01/es01.crt
      - xpack.security.transport.ssl.key=${DOCKER_ES_CERTS_PATH}/es01/es01.key
      - path.logs=/usr/share/elasticsearch/my_logs
      - path.data=/usr/share/elasticsearch/data
      #- logger.org.elasticsearch.cluster.coordination.ClusterBootstrapService=TRACE
      #- logger.org.elasticsearch.discovery=TRACE
    network_mode: host
    volumes:
      - esData_01:/usr/share/elasticsearch/data
      - certs:${DOCKER_ES_CERTS_PATH}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "3"

volumes:
  certs:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: "${LOCAL_DATA_CERTS_PATH}"

  esData_01:
    driver: local
    driver_opts:
      type: 'none'
      o: 'bind'
      device: "${LOCAL_DATA_ES_PATH_ES01}"
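
One thing worth confirming with this setup is that bootstrap.memory_lock actually took effect on every node; if mlockall is false somewhere, heap pages can get swapped out and everything slows down:

GET _nodes?filter_path=**.mlockall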

Here is docker stats for node es01:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
a45a2d904678        es01                5.24%               9.2GiB / 23.55GiB     39.07%              0B / 0B             0B / 32.5MB         126

Same for Kibana:

CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT   MEM %               NET I/O             BLOCK I/O           PIDS
e22190f819ea        kibana              0.03%               284.8MiB / 7.8GiB   3.57%               40MB / 26MB         0B / 0B             12

I don't get why the MEM USAGE for Kibana is always so low, but I read that there is no JVM heap to increase for Kibana.
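
From what I've read, Kibana runs on Node.js rather than a JVM, so its memory is capped by V8's default heap limit. If that limit ever needs raising, something like this in the Kibana compose file should work (value in MB; a sketch I haven't tested):

services:
  kibana:
    image: docker.elastic.co/kibana/kibana:${DOCKER_ES_VERSION}
    environment:
      # Raise the Node.js (V8) old-space heap cap to ~2 GB.
      # Kibana has no JVM; NODE_OPTIONS is how its memory limit is set.
      - NODE_OPTIONS=--max-old-space-size=2048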

I also ran the Opster tool to see if it would flag something, but it found nothing in particular.

So please, if you have an idea, that would be great! I'm completely stuck now, and this state is not acceptable for my team in a production environment.

Thanks for your help,

Do you have any large fields in your documents? This can slow Discover down, since it pulls 500 hits and transferring large fields across the wire can be slow.

If you have large fields, you can exclude them from Discover by going to "Stack Management" -> "Index Patterns" and selecting your index pattern. Open the "Source filters" tab and add your large fields to the filter list.
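
Under the hood, a source filter simply drops those fields from _source in the search response, so you can test the effect in Dev Tools before changing the index pattern (index and field names below are just examples):

GET my-index/_search
{
  "size": 500,
  "_source": { "excludes": ["my_big_json_field"] }
}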

Thanks a lot Nathan for your answer.
I tried your idea with a JSON field (type unknown), but no big change, it stays slow. The other fields are quite normal (string, integer, geo data...).

In fact, I have 2 performance problems:

  • Kibana is really slow to react when I click or move the cursor (around 4 seconds)
  • Discover takes more than 10 seconds (on top of those 4 seconds) to display

I think one problem is on the Kibana side (memory?) and the other on the ES cluster :(
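
In case it's related: Kibana keeps its saved objects in ES, so the .kibana system indices can be checked as well, since slow UI reactions sometimes trace back to them:

GET _cat/indices/.kibana*?v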

Below are some screenshots from Monitoring, in case you can see something weird:

Kibana: [screenshot]

ES master node: [screenshot]

I don't understand why Kibana uses only 2 GB when it runs in a Docker container and its host has 8 GB of RAM.

I'm also not sure whether the recommended 32 GB JVM heap limit for ES applies to the cluster as a whole (so about 10 GB max per node) or to each node (so up to 96 GB in total for a 3-node cluster)?
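
As far as I understand, that guidance is per node (it's about keeping compressed object pointers), and each node's actual heap ceiling can be read like this:

GET _nodes/jvm?filter_path=nodes.*.jvm.mem.heap_max,nodes.*.jvm.using_compressed_ordinary_object_pointers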

Thanks again if you can help me, because I'm really stuck right now.

> I tried your idea with a JSON field (type unknown), but no big change, it stays slow. The other fields are quite normal (string, integer, geo data...).

Is your geodata points or shapes? Shapes can get rather large if they have complex geometry.
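
If you're not sure, the field's mapping will show whether it is a geo_point or a geo_shape (index and field names are placeholders):

GET my-index/_mapping/field/my_geo_field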
