Querying slow in Kibana after reinstallation of Docker

Hi All,

After doing a clean reinstall of my ELK stack, querying the same reindexed data takes much longer. Saved queries that would return in a few seconds previously now pop up with this:

Your query is taking awhile

Run beyond timeout

An example query: `"corona virus" OR "covid" OR "covid-19" OR "coronavirus"`

I haven't changed anything in any of the config files, other than that I did try to upgrade from 7.7.1 > 7.8 before reinstalling again and going back to 7.7.1.

Any ideas on what to check?



RAM & heap are the same? This was all on Docker, so you used the same volumes? How did you 'go back' to 7.7.1? Just an older container on the same volume?

So you're now on the same version as before, just slower? Strikes me as RAM/heap settings on the containers. Maybe swapping, etc.
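If it helps, a couple of quick host-side checks for the RAM/swap theory (the container name here is an assumption):

```shell
# Is the host swapping? (run on the Docker host)
grep -E 'MemTotal|SwapTotal|SwapFree' /proc/meminfo
# How eagerly the kernel swaps; Elasticsearch docs recommend keeping this low:
cat /proc/sys/vm/swappiness
# Memory use vs. limit for the container (name 'elasticsearch' is assumed):
#   docker stats --no-stream elasticsearch
```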

Thanks Steve!

I haven't changed the yml/yaml files, so the heap size will be the same. I deleted all containers and volumes and then just did a fresh install of 7.7.1.

Essentially everything is exactly the same, theoretically. I imagine it's something to do with Docker?



Hmm, no idea - and it was fine again before going back to 7.7.1? All using volumes in Docker, so storage is the same? Both versions were in Docker? Just looking for what could be different.

Yep, both versions were in Docker, but I did reinstall Docker. Yeah, it was fine going back to 7.7.1 because I deleted everything and then reinstalled (including the indexes). The site is currently public if you want the direct URL?

Here's my yaml:

```yaml
version: '3.2'

services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.7.1
    ports:
      - 9200:9200
    environment:
      - node.name=elasticsearch
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms8000m -Xmx8000m"
      - "LS_JAVA_OPTS=-Xmx3g"
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - ./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - esdata:/usr/share/elasticsearch/data
    networks:
      - elk
    restart: always

  kibana:
    image: docker.elastic.co/kibana/kibana:7.7.1
    depends_on:
      - elasticsearch
    ports:
      - 5601:5601
    volumes:
      - ./config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
      - ./config/kibana-plugins:/usr/share/kibana/plugins:ro
      - ./config/certs:/etc/certs
      - kbndata:/usr/share/kibana/data
    environment:
      - ES_JAVA_OPTS="-Xmx8000m -Xms8000m"
```

Hmm, looks okay and simple enough. But you are saying the NEW 7.7.1 is slower than the old one, i.e. BOTH 7.8 and the new 7.7 are much worse than your old 7.7?

Are you sure the old container didn't have 16GB of heap or something? Or is there swap or other stuff going on with this VM? It makes no sense that two identical 7.7 installs on the same machine have vastly different performance; that just can't be unless there is some big history, config, or index/shard difference.
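One way to check the heap question directly; a sketch, assuming the node is reachable on localhost:9200:

```shell
# The compose file asks for -Xms8000m/-Xmx8000m, i.e. this many bytes:
expected_heap=$((8000 * 1024 * 1024))
echo "$expected_heap"
# Compare with what the running node actually reports; if heap_max_in_bytes
# is far from this, the ES_JAVA_OPTS env var isn't reaching the JVM:
#   curl -s 'localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_max_in_bytes'
```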

I was also thinking about history like caching, i.e. the old system was caching a lot; you of course have to run your query several times to get good results. Also, maybe you are querying right after loading and have a huge number of segments that have not yet merged, slowing things down.
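To check the unmerged-segments theory, `_cat/segments` shows the per-shard segment count. Here's a sketch using a made-up dump (the index name and sample numbers are assumptions):

```shell
# Fetch real data with:  curl -s 'localhost:9200/_cat/segments/my-index?v'
# Below is a made-up sample, just to show what to look at:
cat > segments.txt <<'EOF'
index shard prirep segment generation docs.count
my-index 0 p _0 0 12000
my-index 0 p _1 1 9000
my-index 0 p _2 2 17000
EOF
# Count segments; a freshly loaded index can have dozens of small segments,
# which slows queries until they merge in the background:
tail -n +2 segments.txt | wc -l
```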

If it's just 7.8 and you have the disk space, you can set up BOTH versions at once, load the same data, and then query via curl (no need for Kibana) to test apples to apples. That would also be a nice test bed to help the Elastic folks compare 7.7 vs. 7.8.
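For the apples-to-apples test, something like this would hit both clusters with an identical request (the index name, the second cluster's port, and the use of `query_string` to mirror the Kibana bar query are assumptions):

```shell
# Search body equivalent to the Kibana query-bar example:
cat > body.json <<'EOF'
{
  "query": {
    "query_string": {
      "query": "\"corona virus\" OR \"covid\" OR \"covid-19\" OR \"coronavirus\""
    }
  }
}
EOF
cat body.json
# Time the identical request against each cluster, e.g. old on 9200, new on 9201:
#   curl -s -o /dev/null -w 'total %{time_total}s\n' -H 'Content-Type: application/json' \
#        'localhost:9200/my-index/_search' --data-binary @body.json
#   curl -s -o /dev/null -w 'total %{time_total}s\n' -H 'Content-Type: application/json' \
#        'localhost:9201/my-index/_search' --data-binary @body.json
```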

How much data, how many indexes, how many shards?

Hey - yes, both are worse.

I imagine it could be caching, but I didn't think it would slow down queries this much. Is there any way to force caching so I can see an improvement?

The old container was built from this exact yaml, so it wouldn't be different.

The data is quite complex JSON, but there are only 38k documents in this particular index and each isn't that big. The index pattern has about 200 mappings. One shard, one replica.

I'd love to do a comparison, but I don't think I have the space, unfortunately :frowning:
Thanks a lot for your help. Any other ideas?


Hmm, no idea. For caching, just run the query once and then run it again, though if you are also writing, that will invalidate the cache on that shard/index. Note you won't get replicas assigned on a one-node system, so I'd suspect your cluster is yellow?
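If you want the single node to go green, the unassignable replica can be dropped; a sketch, assuming the index is called `my-index`:

```shell
# A one-node cluster can never assign a replica, so the index stays yellow.
cat > settings.json <<'EOF'
{ "index": { "number_of_replicas": 0 } }
EOF
cat settings.json
# Apply it, then recheck health:
#   curl -s -X PUT -H 'Content-Type: application/json' \
#        'localhost:9200/my-index/_settings' --data-binary @settings.json
#   curl -s 'localhost:9200/_cluster/health?pretty'
# To watch caching warm up between identical runs of the same query:
#   curl -s 'localhost:9200/my-index/_stats/request_cache,query_cache?pretty'
```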

Yes, it's yellow. Would another node improve performance?

No idea. It went from seconds to 10X longer or worse, it seems, with no obvious changes, so something sure changed, a lot. I wonder if the old cluster had a lot of settings tuning (refresh interval, mappings, not sure what else) that made it much faster; all of that would have been lost on the new cluster.

Normally two nodes, and thus splitting an index over twice as many CPUs, drives, etc., will improve performance, but you'd have to test.