Kibana Discover tab is lagging

Running ELK stack 7.11.

The issue: when searching 2 months of data in Kibana's Discover, it takes more than 5 minutes to show results. The index size is 100GB. We ran the top command and didn't see anything serious; CPU and memory consumption are fine. We can't find a solution. We had this problem when we ran ELK on a single node, but now we have 3 Elasticsearch master nodes, 6 data nodes, and 2 client nodes.

@Aniket_Pant You might have a very short refresh_interval set. Can you check and maybe change that?
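A sketch of how to check it and, if needed, relax it (index pattern assumed from the rest of this thread; 30s is an illustrative value):

GET log-wlb-sysmon-*/_settings?filter_path=**.refresh_interval

PUT log-wlb-sysmon-*/_settings
{
  "index": { "refresh_interval": "30s" }
}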

Hi @Aniket_Pant

A couple more things.

When you look at Discover -> Inspect, what do you see as the Query Time and Round Trip Time?

Is this a daily index, or 1 index that covers 2 months?

Have you looked at the number of segments?

GET _cat/segments/my-index-*/?v

Another cause can be a lot of segments... if this is a daily index you should force merge to 1 segment on rollover.
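For example, an ILM policy's hot phase can pair rollover with a forcemerge action (a sketch only; the policy name and rollover threshold are illustrative, and forcemerge in the hot phase requires rollover to be present):

PUT _ilm/policy/my-daily-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb" },
          "forcemerge": { "max_num_segments": 1 }
        }
      }
    }
  }
}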

You can also take the query from inspect and go to Dev Tools and run it in the Query Profiler to see what is taking so long.
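The Query Profiler is built on the profile API, so you can also run the search with profiling enabled directly (a minimal sketch; index pattern and time field assumed):

GET log-wlb-sysmon-*/_search
{
  "profile": true,
  "query": {
    "range": { "@timestamp": { "gte": "now-60d" } }
  }
}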

Let us know what you see.

Hi @stephenb, sorry for the late reply.
GET /_cat/segments/log-wlb-sysmon-*?v

index                            shard prirep ip             segment generation docs.count docs.deleted    size size.memory committed searchable version compound
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _1wsi        89154    8437226            3     5gb      104108 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _45qz       194075    8091670         3790     5gb      127596 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _5sox       270465    5713683            0   3.4gb      119484 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _7kob       353387    8161498            5     5gb      121308 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _abyr       482067    7959820          677     5gb      123364 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _cd9e       577058    8224065          557     5gb      127860 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _ddt9       624429    8655234         1057   4.9gb      127092 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _fgze       721850    7992716            0   4.7gb      125036 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _huuk       833132    8152441         4716     5gb      126252 true      true       8.7.0   false
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _j9y9       899361     812124            0   511mb      109548 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _jy5f       930723    8032103         3026   4.8gb      124124 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _k0d7       933595     223623            0 147.8mb      112428 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _k1op       935305     341661         3507 221.6mb      108556 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _k9qi       945738      22428         1509  17.1mb      101492 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _k9ww       945968      22840          653  16.9mb      106812 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _ka0s       946108    1033873            0 673.8mb      114628 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _kab4       946480      38374         1657  26.6mb      101748 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _kail       946749      55492         3791  35.3mb      109396 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _kasb       947099      28031         2772  22.1mb      105612 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _kb54       947560      63202         3123  45.6mb      110196 true      true       8.7.0   true
log-wlb-sysmon-2021.06.09-000007 0     r      ed6 _kbjt       948089     115894         8451  

There are so many reasons why a query can be slow.....

So is it only 1 index with 1 shard, or is that only a portion of the output?

Are you still writing to that index? If so, you cannot / should not merge it. If you are no longer writing to it, you can force_merge it to 1 segment.

That index has not been merged into a single segment. (Merging can help, but it's probably not the only reason.)

What kind of storage? What kind of nodes?

Is Kibana pointing at 1 of the client nodes?

Show us what Inspect looked like after you ran Discover. Can you see the query time?

What does this look like?

Yes, but it is writing to a new index.

When my index consumes 100GB (including replicas) it rolls over to a new index, and the old index is force merged.

I am using 2 client nodes.

I am using an 11-node Elasticsearch cluster: 3 master + 6 data + 2 coordinating/client nodes.
Earlier I didn't have a force_merge action in my ILM policy (winlogbeat_sysmon_policy); I applied it one week ago, and I think the older indices (more than one month old) have not been force merged.

health status index                            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   log-wlb-sysmon-2021.06.09-000007 9CmsyVUGT26UwxID3LT8Xw   1   1   82308203        53179      100gb           50gb
green  open   log-wlb-sysmon-2021.05.31-000006 nL70VTUmQ7iuPsJusGHBsw   1   1   85460152        89372     99.9gb           50gb
green  open   log-wlb-sysmon-2021.06.25-000009 HXwRnl8QRq6F_BCmztlBHw   1   1   78000899        38081      100gb           50gb
green  open   log-wlb-sysmon-2021.07.10-000010 cL1mz5t4RWyNXaoCFmU-cw   1   1   65473320            0     83.1gb         41.5gb
green  open   log-wlb-sysmon-2021.06.17-000008 VMIdvZhhQ7eDrtBrrk2t0w   1   1   81276018        46164      100gb         49.9gb
green  open   log-wlb-sysmon-2021.07.11-000011 rhKip49IRLeK2Ifjzodd-g   1   1   83644074        82230    100.6gb         50.3gb

log-wlb-sysmon-2021.07.10-000010 has consumed 83.1GB of data; it was force merged because it reached 100GB, and after the force merge it came down to 83.1GB.

I meant run Discover for your 2 months of data and show the query and round trip time.

Yes on rollover you should force merge.

So yes, you have multiple indices; all the ones that have been rolled over should be force merged, and I suspect they have not been.

You can force_merge the older indices with the API here

When I asked about your nodes, I meant not just the count: what is the storage (SSD, HDD, EBS/network), and how much RAM, CPU, and JVM heap?

You can have lots of nodes but if they are not the correct configuration you may not get the performance you like.

The data I am showing you is from May 31 to July 22.

  1. Master node h/w configuration (3 master nodes):
    each with 50GB of SSD storage and 15GB of RAM (heap size 7GB)
  2. Data node h/w configuration (6 data nodes):
    each with 1TB of SSD storage and 64GB of RAM (heap size 32GB)
  3. Client node h/w configuration (2 client nodes):
    each with 50GB of SSD storage and 15GB of RAM (heap size 7GB)

I want to do this, but I also want to reindex some indexes so force_merge can be applied to old indices which are not writable.
How can I check for EBS?

Yes it should run faster than that.

Don't worry about EBS; it is a different type of storage, which you are not using, that can cause poor performance.

32GB heap is not correct. Are you explicitly setting it to 32GB, or are you letting Elasticsearch set it automatically? See here. If you are explicitly setting the JVM heap, please set it to 28GB to start; see here.
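If you do set it explicitly, it would look like this in jvm.options (or a file under jvm.options.d/), with min and max kept equal:

-Xms28g
-Xmx28g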

How many CPU on the Data Nodes?

I do not understand this. You can run force merge on the older indices; that is independent of / does not require a reindex. Even if there is some reason you want to reindex, I would force merge first. I strongly recommend trying the force merge. You can run force merge on an open index.

Run this on all your older / not current indices, then add a force merge on rollover to your ILM policy.

POST log-wlb-sysmon-2021.05.31-000006/_forcemerge/?max_num_segments=1

Hey @stephenb, can I do this?

POST log-wlb-sysmon-2021.05.31-000006,log-wlb-sysmon-2021.07.10-000010/_forcemerge/?max_num_segments=1

These indices are no longer being written to, but log-wlb-sysmon-2021.05.31-000006's size is increasing: earlier it was 100GB, now it is 125GB.

health status index                            uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   log-wlb-sysmon-2021.07.22-000012 SMs5nFCZTuWbxa3sItnR1g   1   1   79735395         3884    101.3gb         50.6gb
green  open   log-wlb-sysmon-2021.06.09-000007 9CmsyVUGT26UwxID3LT8Xw   1   1   82308203        53179      100gb           50gb
green  open   log-wlb-sysmon-2021.05.31-000006 nL70VTUmQ7iuPsJusGHBsw   1   1   85460152        14167    131.5gb           65gb
green  open   log-wlb-sysmon-2021.06.25-000009 HXwRnl8QRq6F_BCmztlBHw   1   1   78000899        38081      100gb           50gb
green  open   log-wlb-sysmon-2021.07.28-000013 6as7-RLlQpest5UkDIp5Rg   1   1   10839492        29847     13.8gb          6.9gb
green  open   log-wlb-sysmon-2021.07.10-000010 cL1mz5t4RWyNXaoCFmU-cw   1   1   65473320            0     83.1gb         41.5gb
green  open   log-wlb-sysmon-2021.06.17-000008 VMIdvZhhQ7eDrtBrrk2t0w   1   1   81276018        46164      100gb         49.9gb
green  open   log-wlb-sysmon-2021.07.11-000011 rhKip49IRLeK2Ifjzodd-g   1   1   83783780            0    101.4gb         50.7gb

It's probably not finished yet...

What does the cat segments show?

When force merge is finished the deleted docs will be 0.
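If you want to check whether a force merge is still running, one option (a sketch) is the tasks API:

GET _tasks?actions=*forcemerge*&detailed=true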

Earlier, the log-wlb-sysmon-2021.07.10-000010 index had 100GB of shard size (primary + replica); when I enabled the force_merge option in ILM, its size reduced, so it now has 83.1GB of data. But today I mistakenly ran this: POST log-wlb-sysmon-2021.05.31-000006,log-wlb-sysmon-2021.07.10-000010/_forcemerge/?max_num_segments=1

GET /_cat/segments/log-wlb-sysmon-2021.07.10-000010?v

index                            shard prirep ip             segment generation docs.count docs.deleted   size size.memory committed searchable version compound
log-wlb-sysmon-2021.07.10-000010 0     p      xx.xx.xx.xx _1cs4        63220   65473320            0 41.5gb      200876 true      true       8.8.2   false
log-wlb-sysmon-2021.07.10-000010 0     r      xx.xx.xx.xx _1d3c        63624   65473320            0 41.5gb      200876 true      true       8.8.2   false

GET /_cat/segments/log-wlb-sysmon-2021.05.31-000006?v

index                            shard prirep ip             segment generation docs.count docs.deleted    size size.memory committed searchable version compound
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _2l0p       120553    8220117         5173   4.8gb      102140 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _4nra       217414    7821264         1354   4.6gb      100324 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _6s4j       316387    8379249         1592   4.9gb      102156 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _96oy       428578    8372084            0     5gb      100716 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _bovn       545459    8384858         5360   4.9gb       95724 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _e6dm       661450    8393580            0     5gb       99564 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _g9mv       758983    8469205            0   4.8gb      100596 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _hpby       825982    8443943          921   4.7gb       94948 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _j92e       898214    8595131         2239     5gb      106068 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kofu       964794    9112203         3900     5gb      103732 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kq2q       966914     320116         1654 187.7mb       84276 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kulu       972786       9220         1466   6.9mb       79796 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kun7       972835       7014          488   5.5mb       72748 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuqt       972965      12288         1307   8.7mb       83300 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kutb       973055       8214          868   5.9mb       79196 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kutl       973065     892836         9624 502.7mb       86556 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuu6       973086       1296          164     1mb       80932 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuut       973109       2191          281   1.5mb       72220 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuv2       973118       1740          184   1.5mb       72308 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuva       973126       2029          212   1.7mb       58532 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuvm       973138       1888          185   1.7mb       59868 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuvv       973147       1946          415   1.3mb       57124 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuw4       973156       1481          184   1.3mb       71860 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuwo       973176       2105          190   1.6mb       81684 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kuwy       973186       2398          371   1.8mb       81700 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kux8       973196       1756          338   1.4mb       81756 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _2pav       126103    8170276         9368   4.8gb      101932 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _4tox       225105    8338982          576   4.9gb      102716 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _74xd       332977    8387468            0     5gb      102524 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _9hoi       442818    8196044         2077   4.8gb      100484 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _c8fj       570799    8112405         1844   4.8gb       95348 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _eisv       677551    8065736            0   4.8gb       99084 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _gcuy       763162    5501984            0   3.2gb       97028 true      true       8.7.0   true
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _hb2n       807503    8813320            0     5gb      100844 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _isoe       876974    8733352          254     5gb      104612 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _khcb       955595    8694388           48     5gb      105260 true      true       8.7.0   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _kz90       978804    4446197            0   2.3gb       93924 true      true       8.8.2   true

Hey @stephenb, the force_merge process is finished now, but what is the difference?

index                            shard prirep ip             segment generation docs.count docs.deleted   size size.memory committed searchable version compound
log-wlb-sysmon-2021.05.31-000006 0     r      xx.xx.xx.xx _kux9       973197   85460152            0 51.1gb      196252 true      true       8.8.2   false
log-wlb-sysmon-2021.05.31-000006 0     p      xx.xx.xx.xx _kz91       978805   85460152            0 51.1gb      196252 true      true       8.8.2   false

Sorry, I forgot to ask you about this heap size: the official Elasticsearch documentation says that for 600 shards you need 32GB of heap memory.

Glad the force merge finished. This index is now more efficient to search: fewer segments mean more efficient queries.

For the JVM, did you read the doc I linked? If you set it to 32GB the cluster is not optimized; in fact it will run poorly! 50% of host RAM is a guide, but as you approach 30GB it changes.

From the docs I linked

Set Xms and Xmx to no more than the threshold for compressed ordinary object pointers (oops). The exact threshold varies but 26GB is safe on most systems and can be as large as 30GB on some systems. To verify you are under the threshold, check the Elasticsearch log for an entry like this:

heap size [1.9gb], compressed ordinary object pointers [true]

Here is official docs on Shard sizing

Aim for 20 shards or fewer per GB of heap memory... Personally I lean towards 15. (With a 28GB heap, 20 shards/GB works out to at most ~560 shards per node.)


We are setting it to 28GB.

Can you tell me: I am force merging one index at a time and it takes a while. Can we force merge multiple indices at the same time?

See Here : Force merge API | Elasticsearch Guide [8.11] | Elastic

Force merging multiple indices

You can force merge multiple indices with a single request by targeting:

  • One or more data streams that contain multiple backing indices
  • Multiple indices
  • One or more aliases
  • All data streams and indices in a cluster

Each targeted shard is force-merged separately using the force_merge threadpool. By default each node only has a single force_merge thread which means that the shards on that node are force-merged one at a time. If you expand the force_merge threadpool on a node then it will force merge its shards in parallel.
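As a sketch, that would be a per-node setting in elasticsearch.yml (2 threads is illustrative; each extra thread adds concurrent disk I/O):

thread_pool.force_merge.size: 2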

Force merge makes the storage for the shard being merged temporarily increase, up to double its size in case max_num_segments parameter is set to 1 , as all segments need to be rewritten into a new one.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.