Problem: ram.percent goes up to 100%, which results in slow dashboards.
I have dedicated 62 GB of RAM to the JVM, as each server has 230 GB of built-in RAM.
[root@elk-es-ho-10 ~]# curl -XGET 10.146.134.11:8888/_cat/nodes?v
ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
10.146.134.7 53 85 36 8.23 7.49 8.33 di - elk-es-ho-06
10.146.134.10 41 100 9 1.93 1.49 1.64 di - elk-es-ho-09
10.146.134.8 73 93 46 8.04 8.67 9.23 di - elk-es-ho-07
10.146.134.6 65 97 52 6.84 5.84 6.38 di - elk-es-ho-05
10.146.134.13 2 37 0 0.05 0.03 0.05 mi * elk-es-ho-12
10.146.134.11 2 36 0 0.00 0.01 0.05 i - elk-es-ho-10
10.146.134.14 2 36 0 0.00 0.01 0.05 mi - elk-es-ho-13
10.146.134.12 3 38 2 0.21 0.39 0.50 mi - elk-es-ho-11
10.146.134.24 69 94 40 6.01 7.46 8.04 di - elk-es-ho-04
Does this mean that you are generating 560 indices per day? Is this done with the default 5 primary shards and 1 replica? What is your retention period?
Having a large number of small indices and shards is very inefficient, as each shard comes with overhead. Try to organise your indices and sharding strategy so you get an average shard size between a few GB and a few tens of GB.
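To answer the questions above, you can list the indices together with their primary shard counts and sizes. As a sketch (host and port copied from the command above; the h/s column and sort parameters are assumed to be supported by your Elasticsearch version):

curl -XGET '10.146.134.11:8888/_cat/indices?v&h=index,pri,rep,docs.count,store.size&s=store.size:desc'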
The recommendation is to have a heap size of around 31GB so you can benefit from compressed pointers. In order to utilise the resources available on larger servers, it is however not uncommon to instead run multiple Elasticsearch nodes per host.
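As a rough sketch for a 5.x install (file path and values are illustrative, adjust to your setup), the heap would be set in config/jvm.options, and whether compressed ordinary object pointers are in use can then be checked via the nodes info API:

# config/jvm.options
-Xms31g
-Xmx31g

curl -XGET '10.146.134.11:8888/_nodes/jvm?pretty&filter_path=nodes.*.jvm.using_compressed_ordinary_object_pointers'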
Sorry for the late reply, I was on holiday.
Here is the report from curl -XGET 10.146.134.12:8888/_cat/shards?v
logstash-custom_ats_2-vector-2017.07.05 8 r STARTED 43189469 63.5gb 10.146.134.24
logstash-custom_ats_2-vector-2017.07.05 8 p STARTED 43189470 63.3gb 10.146.134.6
logstash-custom_ats_2-vector-2017.07.05 3 r STARTED 43195576 63.4gb 10.146.134.10
logstash-custom_ats_2-vector-2017.07.05 3 p STARTED 43195576 63.3gb 10.146.134.6
logstash-custom_ats_2-vector-2017.07.05 11 p STARTED 43178812 63.2gb 10.146.134.8
logstash-custom_ats_2-vector-2017.07.05 11 r STARTED 43178812 63.2gb 10.146.134.10
logstash-custom_ats_2-vector-2017.07.05 4 p STARTED 43190597 63.4gb 10.146.134.7
logstash-custom_ats_2-vector-2017.07.05 4 r STARTED 43190597 63.2gb 10.146.134.6
logstash-custom_ats_2-vector-2017.07.05 1 p STARTED 43182353 63.1gb 10.146.134.8
logstash-custom_ats_2-vector-2017.07.05 1 r STARTED 43182353 63.4gb 10.146.134.6
logstash-custom_ats_2-vector-2017.07.05 7 r STARTED 43184131 63.4gb 10.146.134.7
logstash-custom_ats_2-vector-2017.07.05 7 p STARTED 43184131 63.2gb 10.146.134.10
logstash-custom_ats_2-vector-2017.07.05 10 r STARTED 43190307 63.5gb 10.146.134.7
logstash-custom_ats_2-vector-2017.07.05 10 p STARTED 43190307 63.2gb 10.146.134.24
logstash-custom_ats_2-vector-2017.07.05 6 r STARTED 43185770 63.5gb 10.146.134.24
logstash-custom_ats_2-vector-2017.07.05 6 p STARTED 43185770 63.1gb 10.146.134.8
logstash-custom_ats_2-vector-2017.07.05 2 r STARTED 43192902 63.5gb 10.146.134.7
logstash-custom_ats_2-vector-2017.07.05 2 p STARTED 43192902 63.2gb 10.146.134.10
logstash-custom_ats_2-vector-2017.07.05 5 p STARTED 43187653 63.3gb 10.146.134.24
logstash-custom_ats_2-vector-2017.07.05 5 r STARTED 43187653 63.5gb 10.146.134.8
logstash-custom_ats_2-vector-2017.07.05 9 p STARTED 43196568 63.3gb 10.146.134.7
logstash-custom_ats_2-vector-2017.07.05 9 r STARTED 43196568 63.4gb 10.146.134.8
logstash-custom_ats_2-vector-2017.07.05 0 p STARTED 43189187 63.3gb 10.146.134.24
logstash-custom_ats_2-vector-2017.07.05 0 r STARTED 43189189 63.5gb 10.146.134.10
[root@elk-es-ho-11 ~]#
The current index has 12 shards, each shard is around 60 GB, and storing all events in one index comes to around 2.1 TB of logs every day.
logstash-puppet-vector-2017.07.04 9 r STARTED 695 359.7kb 10.146.134.7 elk-es-ho-06
logstash-puppet-vector-2017.07.04 9 p STARTED 695 401.3kb 10.146.134.6 elk-es-ho-05
logstash-puppet-vector-2017.07.04 1 p STARTED 624 298.3kb 10.146.134.7 elk-es-ho-06
logstash-puppet-vector-2017.07.04 1 r STARTED 624 351.8kb 10.146.134.8 elk-es-ho-07
logstash-puppet-vector-2017.07.04 11 p STARTED 717 408.5kb 10.146.134.7 elk-es-ho-06
logstash-puppet-vector-2017.07.04 11 r STARTED 717 408.8kb 10.146.134.8 elk-es-ho-07
logstash-puppet-vector-2017.07.04 6 p STARTED 675 331.7kb 10.146.134.7 elk-es-ho-06
logstash-puppet-vector-2017.07.04 6 r STARTED 675 401.3kb 10.146.134.6 elk-es-ho-05
logstash-puppet-vector-2017.07.04 10 r STARTED 657 382.5kb 10.146.134.24 elk-es-ho-04
logstash-puppet-vector-2017.07.04 10 p STARTED 657 368.6kb 10.146.134.10 elk-es-ho-09
logstash-puppet-vector-2017.07.04 4 r STARTED 704 322.7kb 10.146.134.24 elk-es-ho-04
logstash-puppet-vector-2017.07.04 4 p STARTED 704 403.4kb 10.146.134.6 elk-es-ho-05
logstash-puppet-vector-2017.07.04 8 p STARTED 669 362.5kb 10.146.134.8 elk-es-ho-07
logstash-puppet-vector-2017.07.04 8 r STARTED 669 350.8kb 10.146.134.10 elk-es-ho-09
logstash-puppet-vector-2017.07.04 7 r STARTED 698 309.9kb 10.146.134.7 elk-es-ho-06
logstash-puppet-vector-2017.07.04 7 p STARTED 698 350.6kb 10.146.134.24 elk-es-ho-04
logstash-puppet-vector-2017.07.04 0 r STARTED 686 354.3kb 10.146.134.24 elk-es-ho-04
logstash-puppet-vector-2017.07.04 0 p STARTED 686 382.4kb 10.146.134.10 elk-es-ho-09
logstash-teakd-vector-2017.06.29 9 r STARTED 4293057 4.2gb 10.146.134.24 elk-es-ho-04
logstash-teakd-vector-2017.06.29 9 p STARTED 4293057 4.2gb 10.146.134.8 elk-es-ho-07
logstash-teakd-vector-2017.06.29 10 r STARTED 4296013 4.2gb 10.146.134.24 elk-es-ho-04
logstash-teakd-vector-2017.06.29 10 p STARTED 4296013 4.2gb 10.146.134.10 elk-es-ho-09
logstash-teakd-vector-2017.06.29 11 r STARTED 4294355 4.2gb 10.146.134.10 elk-es-ho-09
logstash-teakd-vector-2017.06.29 11 p STARTED 4294355 4.2gb 10.146.134.6 elk-es-ho-05
logstash-teakd-vector-2017.06.29 5 r STARTED 4294367 4.1gb 10.146.134.7 elk-es-ho-06
logstash-teakd-vector-2017.06.29 5 p STARTED 4294367 4.2gb 10.146.134.10 elk-es-ho-09
logstash-teakd-vector-2017.06.29 1 r STARTED 4293620 4.1gb 10.146.134.7 elk-es-ho-06
logstash-teakd-vector-2017.06.29 1 p STARTED 4293620 4.1gb 10.146.134.6 elk-es-ho-05
logstash-teakd-vector-2017.06.29 2 p STARTED 4294387 4.1gb 10.146.134.7 elk-es-ho-06
logstash-teakd-vector-2017.06.29 2 r STARTED 4294387 4.2gb 10.146.134.8 elk-es-ho-07
logstash-teakd-vector-2017.06.29 7 p STARTED 4292952 4.2gb 10.146.134.7 elk-es-ho-06
logstash-teakd-vector-2017.06.29 7 r STARTED 4292952 4.2gb 10.146.134.6 elk-es-ho-05
logstash-teakd-vector-2017.06.29 8 p STARTED 4294534 4.2gb 10.146.134.24 elk-es-ho-04
logstash-teakd-vector-2017.06.29 8 r STARTED 4294534 4.1gb 10.146.134.8 elk-es-ho-07
logstash-teakd-vector-2017.06.29 4 p STARTED 4294611 4.1gb 10.146.134.8 elk-es-ho-07
logstash-teakd-vector-2017.06.29 4 r STARTED 4294611 4.2gb 10.146.134.6 elk-es-ho-05
logstash-teakd-vector-2017.06.29 3 p STARTED 4292768 4.2gb 10.146.134.24 elk-es-ho-04
logstash-teakd-vector-2017.06.29 3 r STARTED 4292768 4.1gb 10.146.134.8 elk-es-ho-07
logstash-teakd-vector-2017.06.29 6 r STARTED 4295711 4.2gb 10.146.134.10 elk-es-ho-09
logstash-teakd-vector-2017.06.29 6 p STARTED 4295711 4.2gb 10.146.134.6 elk-es-ho-05
logstash-teakd-vector-2017.06.29 0 r STARTED 4291871 4.2gb 10.146.134.24 elk-es-ho-04
logstash-teakd-vector-2017.06.29 0 p STARTED 4291871 4.2gb 10.146.134.10 elk-es-ho-09
logstash-error-vector-2017.06.26 11 p STARTED 771885 181.9mb 10.146.134.10 elk-es-ho-09
logstash-custom_ats_2-vector-2017.07.05 3 r STARTED 43205422 63.4gb 10.146.134.10 elk-es-ho-09
logstash-custom_ats_2-vector-2017.07.05 3 p STARTED 43205422 63.4gb 10.146.134.6 elk-es-ho-05
logstash-custom_ats_2-vector-2017.07.05 11 p STARTED 43188652 63.2gb 10.146.134.8 elk-es-ho-07
logstash-custom_ats_2-vector-2017.07.05 11 r STARTED 43188653 63.2gb 10.146.134.10 elk-es-ho-09
logstash-custom_ats_2-vector-2017.07.05 4 p STARTED 43200448 63.4gb 10.146.134.7 elk-es-ho-06
One of your indices is much larger than all the others and could probably benefit from a larger number of primary shards. Most of your indices do however have relatively little data and are therefore oversharded. For most of them, one or two primary shards would be sufficient and most likely more efficient.
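If you want to apply this to future daily indices, one hedged sketch is an index template; the template name and index pattern below are only examples, and it only affects indices created after the template is in place:

curl -XPUT '10.146.134.11:8888/_template/logstash-small-indices' -H 'Content-Type: application/json' -d '
{
  "template": "logstash-puppet-*",
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1
  }
}'

The large logstash-custom_ats_2 index could get a similar template of its own with a higher number_of_shards, so its individual shards stay in the tens-of-GB range.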