Hi,
I've been experiencing a long-term issue with Elasticsearch where one node in my 3-node cluster runs out of heap space (which node it is seems to vary each time). The specifics of my setup are as follows:
- OS: Windows 2012 R2
- ES version: 6.2.1
- Logstash version: 6.2.1
- Kibana version: 6.2.1
- Winlogbeat version: 6.2.1
- Number of nodes: ES: 3 - LS: 1 - KB: 1
- JVM Heap size on each ES node: 4GB - Total: 12GB
- Number of shards per index: 3 (1 primary, 2 replicas)
- Index timescale: 4 indices per day.
- Total shards per day: 12
- Retention period before curation: 1 week (total shards per week: 84). At the moment the shard count never grows beyond this before the indices are deleted (see the check after this list for how I verify it). In time I plan to increase the retention period.
- Java version on ES nodes: 1.8.0_144-b01
- Java version on LS: 1.8.0_151-b12
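For reference, this is roughly how I've been checking the per-index shard counts and sizes (the host name is just one of my ES nodes; any of the three would do):
# lists each index with its primary/replica shard counts and on-disk size
curl 'es-01.myserver.com:9200/_cat/indices?v&h=index,pri,rep,docs.count,store.size'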
Despite setting all the index templates to 1 primary shard and 2 replicas in the hope this would reduce the load on the JVM heap (a rough example of the template settings follows the allocation command below), it hasn't helped. The heap on one ES node (which one varies) still climbs so high that garbage collection starts timing out, eventually bringing the node down. The only way to get things working again is to restart the host or to set cluster.routing.allocation.enable back to "all", as shown below:
curl -XPUT 'es-01.myserver.com:9200/_cluster/settings?pretty' -H 'Content-Type: application/json' -d'
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}
'
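For completeness, the index templates themselves are set along these lines (the template name and index pattern here are placeholders for my actual ones, which match the daily Winlogbeat indices):
# template name and index pattern are placeholders for my real ones
curl -XPUT 'es-01.myserver.com:9200/_template/my_template?pretty' -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["winlogbeat-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 2
  }
}
'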
The daily indices I am building are small, ranging from 300 KB to no more than 3 GB. I've disabled all dashboards in Kibana, so nothing is pulling data from the nodes. Still, JVM heap usage keeps climbing, and log entries such as:
[2018-02-26T11:10:09,608][INFO ][o.e.m.j.JvmGcMonitorService] [es-01] [gc][5867] overhead, spent [351ms] collecting in the last [1s]
are starting to fill the ES nodes' logs as the affected node slowly becomes more overloaded.
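In case it's useful, this is how I've been watching the heap on each node while the GC messages pile up (columns chosen just for this check):
# shows current heap usage per node alongside the configured maximum
curl 'es-01.myserver.com:9200/_cat/nodes?v&h=name,heap.percent,heap.current,heap.max'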
I'm now at a loss to explain why. I was fairly confident that reducing the number of shards per index would resolve the issue, but it seems not. If anyone can suggest anything else I can try, I'd appreciate the help.
Thanks.