The average document size seems to be about 18 KB (total primary store size / total documents).
The index has 5 primary shards with 2 replicas each, so 15 shards in total.
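For reference, here is a quick way to double-check those numbers from the `_cat/indices` output (just a sketch, assuming a cluster reachable at localhost:9200 and the Python requests client; adjust as needed):

```python
import requests

ES = "http://localhost:9200"  # assumed cluster address

# _cat/indices with format=json and bytes=b returns machine-readable rows
rows = requests.get(f"{ES}/_cat/indices",
                    params={"format": "json", "bytes": "b"}).json()

total_docs = sum(int(r["docs.count"]) for r in rows)
total_pri_bytes = sum(int(r["pri.store.size"]) for r in rows)

# Average document size = total primary store size / total documents
print("avg doc size (KB):", total_pri_bytes / total_docs / 1024)

# Shards per index = primaries * (1 + replicas), e.g. 5 * (1 + 2) = 15
for r in rows:
    print(r["index"], int(r["pri"]) * (1 + int(r["rep"])), "shards")
```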
@Christian_Dahlqvist
I'm not sure why a HighCPU node type was chosen; I'll find out. Do you think it could make a difference?
Yes, we also have a service indexing documents in parallel. However, the aggregation latency I mentioned was measured while very little indexing was going on. When the indexing rate is high, performance is even worse (see the image below).
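(As a rough way to confirm how much indexing is happening around a given measurement, something like the following can sample the index's indexing counter; a sketch assuming a local cluster and a placeholder index name:)

```python
import time
import requests

ES = "http://localhost:9200"   # assumed cluster address
INDEX = "my_index"             # placeholder for the actual index name

def indexed_total():
    # /<index>/_stats/indexing exposes a cumulative count of index operations
    stats = requests.get(f"{ES}/{INDEX}/_stats/indexing").json()
    return stats["_all"]["primaries"]["indexing"]["index_total"]

before = indexed_total()
time.sleep(60)
after = indexed_total()

# Approximate indexing rate over the sampled minute
print("indexing rate (docs/min):", after - before)
```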
Here's the output of `_cat/indices`:
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open .monitoring-es-6-2019.07.19 xBVm7xQ-RQeycENBX3A8dQ 1 1 59030 252 86.2mb 45.9mb
green open .monitoring-beats-6-2019.07.18 3UO99F09TGeYJjtYf8XqKw 1 1 31 0 725.4kb 362.7kb
green open .watcher-history-9-2019.07.18 5b0hfi3RQLKPgZXoyi8FJQ 1 1 8640 0 23.5mb 11.6mb
green open .kibana_7 PjxwMPIRR-OTBVkPrYf1VQ 1 1 11 0 99.2kb 49.6kb
green open .monitoring-kibana-6-2019.07.18 SbhRB6G3RO-Jktl2h741Fw 1 1 8643 0 5mb 2.5mb
green open .triggered_watches-6 uE1_dKQeQxSI9Q4scCs6Yg 1 1 1 0 1.7mb 7kb
green open ****_porterstem_minimal_english djATmntkSGW1gcRxgVfUgg 5 2 470809156 119711028 276.7gb 90.6gb
green open .security-6 iVW38mvcR16O10lJ77OizQ 1 2 7 0 130.5kb 43.5kb
green open .monitoring-es-6-2019.07.17 C32br4NfS0myL3pf6eZKFA 1 1 220162 464 301.7mb 151mb
green open .kibana-6 4GEkQk5EToOJtWLxERGBkg 1 1 2 0 7.1kb 3.5kb
green open .watches-6 vKrCnjzIQHWWaduY5LGN6A 1 1 6 0 127.9kb 72.5kb
green open apm-6.8.1-onboarding-2019.07.18 mv6xfNSrQZq1aBA9s1uZyQ 1 2 1 0 17.8kb 5.9kb
green open .monitoring-kibana-6-2019.07.19 EyG7NNkqRHGHEsz6qftVuw 1 1 2093 0 1.7mb 1.1mb
green open .monitoring-es-6-2019.07.18 8LzdtOwKRyqgz_NHRDPMhA 1 1 235498 630 319.4mb 159.2mb
green open ****_porterstem_minimal_english4 YFmumZ_aTDSVU5sgOP4ZkA 5 1 0 0 2.5kb 1.2kb
green open .monitoring-kibana-6-2019.07.17 Cr_OAvPsTWabGs_QTJr46g 1 1 8640 0 4.6mb 2.3mb
green open .watcher-history-9-2019.07.17 5aPKdb3bSaiTD2vOMBQJsg 1 1 8640 0 24.8mb 12.3mb
green open .monitoring-alerts-6 N6oNBc9PRzG0NEQehe71RA 1 1 17 4 91.9kb 46.3kb
green open .watcher-history-9-2019.07.19 F-nYS0SASZWxBden2_dxow 1 1 2094 0 9mb 6mb
green open .kibana_task_manager OQauSQZeQ0eOye6jQGtzLA 1 1 2 0 25.1kb 12.5kb
@dadoonet the latency I mentioned (4-6 seconds) refers only to the "took" value in the ES response.
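(For clarity, that's the top-level `took` field Elasticsearch includes in every search/aggregation response, reported in milliseconds, not end-to-end client time. A minimal sketch of reading it, assuming the Python requests client and placeholder index/field names:)

```python
import requests

ES = "http://localhost:9200"   # assumed cluster address
INDEX = "my_index"             # placeholder for the actual index name

# size=0 so only the aggregation result is returned; field name is a placeholder
body = {
    "size": 0,
    "aggs": {"by_field": {"terms": {"field": "some_keyword_field"}}},
}

resp = requests.post(f"{ES}/{INDEX}/_search", json=body).json()

# "took" is the server-side execution time of the request, in milliseconds
print("took (ms):", resp["took"])
```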