Hi, we are running ELK 5.4.3 on a single node, with one daily index of time-series data that never changes once written. Each index holds around 25 GB of data, split across 2 shards.
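To double-check those numbers I pull the per-index and per-shard sizes from the _cat APIs with a small script like the one below (just a sketch, assuming the node is reachable on localhost:9200 without authentication; adjust the address for your setup):

```python
# Sketch to confirm index and shard sizes; assumes Elasticsearch is
# reachable at localhost:9200 with no authentication.
import requests

ES = "http://localhost:9200"  # assumption: adjust to your node's address

# Per-index document counts and store sizes
print(requests.get(f"{ES}/_cat/indices?v&h=index,pri,rep,docs.count,store.size&s=index").text)

# How each index's data is split across its shards
print(requests.get(f"{ES}/_cat/shards?v&h=index,shard,prirep,store&s=index").text)
```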
A few times a day I see warnings like this in the Elasticsearch log:
[2019-01-07T09:35:25,984][WARN ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][young][7635766][3393290] duration [2.2s], collections [1]/[2.6s], total [2.2s]/[1.9d], memory [8.4gb]->[8.4gb]/[12.3gb], all_pools {[young] [2.1mb]->[1mb]/[865.3mb]}{[survivor] [99.5mb]->[101.9mb]/[108.1mb]}{[old] [8.3gb]->[8.3gb]/[11.4gb]}
[2019-01-07T09:35:25,984][WARN ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][7635766] overhead, spent [2.2s] collecting in the last [2.6s]
[2019-01-07T09:38:29,660][WARN ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][old][7635767][784] duration [3m], collections [1]/[3m], total [3m]/[1.1h], memory [8.4gb]->[2.5gb]/[12.3gb], all_pools {[young] [1mb]->[9.9mb]/[865.3mb]}{[survivor] [101.9mb]->[0b]/[108.1mb]}{[old] [8.3gb]->[2.5gb]/[11.4gb]}
[2019-01-07T09:38:29,660][WARN ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][7635767] overhead, spent [3m] collecting in the last [3m]
[2019-01-07T09:41:36,064][INFO ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][young][7635951][3393561] duration [1.4s], collections [2]/[2.2s], total [1.4s]/[1.9d], memory [8.8gb]->[8.4gb]/[12.3gb], all_pools {[young] [398mb]->[4.8mb]/[865.3mb]}{[survivor] [108.1mb]->[64.2mb]/[108.1mb]}{[old] [8.3gb]->[8.4gb]/[11.4gb]}
[2019-01-07T09:41:36,064][WARN ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][7635951] overhead, spent [1.4s] collecting in the last [2.2s]
[2019-01-07T10:09:12,518][INFO ][o.e.c.m.MetaDataMappingService] [Bpmy3wE] [exceeded-2019.01.07/Op9v4qCiSy2CakCtRlRApA] update_mapping [sbc_event]
[2019-01-07T10:52:15,846][INFO ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][7640173] overhead, spent [439ms] collecting in the last [1.2s]
[2019-01-07T12:13:15,629][INFO ][o.e.c.m.MetaDataDeleteIndexService] [Bpmy3wE] [logstash-2019.01.03/pwdQEa96Rai9bXSaxWY5aA] deleting index
[2019-01-07T12:13:32,873][INFO ][o.e.c.m.MetaDataDeleteIndexService] [Bpmy3wE] [exceeded-2019.01.04/aE68UAMcR_e8vf1RQHA1sg] deleting index
[2019-01-07T12:13:34,780][INFO ][o.e.c.m.MetaDataDeleteIndexService] [Bpmy3wE] [collectd-2019.01.04/AfQVhKohRGmRZnKdnOuU3Q] deleting index
[2019-01-07T15:01:55,541][INFO ][o.e.m.j.JvmGcMonitorService] [Bpmy3wE] [gc][7654993] overhead, spent [252ms] collecting in the last [1s]
Elasticsearch (and Kibana) has become very slow to work with. There is no problem with disk space, and the heap size is set to
-Xmx13g
-Xms13g
Any idea what else to check? Maybe the segments? Or is Elasticsearch simply unable to keep up with this much incoming data? Segment merging is left at the default settings.
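Here is roughly what I planned to check next: per-node heap usage (the long old-GC pauses above suggest the heap is under pressure) and the segment counts of the older, read-only daily indices. This is only a sketch, assuming the node is on localhost:9200 without authentication; the index name is just an example:

```python
# Sketch of the checks I had in mind; assumes Elasticsearch on
# localhost:9200 with no authentication, and the index name is an example.
import requests

ES = "http://localhost:9200"  # assumption: adjust to your node's address

# JVM heap usage per node: old-gen sitting close to the 13 GB limit
# would explain the long old-GC pauses in the log above.
stats = requests.get(f"{ES}/_nodes/stats/jvm").json()
for node in stats["nodes"].values():
    mem = node["jvm"]["mem"]
    print(node["name"], mem["heap_used_in_bytes"], "of", mem["heap_max_in_bytes"], "bytes heap used")

# Segment counts for an older daily index; many small segments add heap overhead.
print(requests.get(f"{ES}/_cat/segments/logstash-2019.01.06?v").text)

# Yesterday's index no longer receives writes, so it can be force-merged
# down to a single segment (only safe on indices that are not written to anymore).
requests.post(f"{ES}/logstash-2019.01.06/_forcemerge?max_num_segments=1")
```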