I'm regularly seeing my Elasticsearch instance go into continuous garbage collection, with log entries like this appearing every 3 seconds:
[2017-03-29T15:23:29,060][WARN ][o.e.m.j.JvmGcMonitorService] [3BURiFV] [gc][110] overhead, spent [1.8s] collecting in the last [1.8s]
The node uses a lot of CPU and becomes unresponsive while this is happening. This is a single-node setup.
I am only storing some Logstash and Metricbeat data from a few hosts, which I wouldn't have thought counts as a large amount of data, though perhaps it does. The /_stats endpoint reports about 22 GB of data in total.
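In case it matters how I measured that, the figure comes from the index stats API, with something along these lines (my node listens on the default localhost:9200, and the total is the _all.total.store.size_in_bytes value):

    curl -s 'http://localhost:9200/_stats/store?pretty'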
I am running a 32-bit JVM with -Xms1g -Xmx1g.
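For completeness, this is how the heap is set in my jvm.options (it's config/jvm.options in my install; packaged installs keep it under /etc/elasticsearch):

    -Xms1g
    -Xmx1g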
Is this simply not enough memory? What should I do?
Thanks,
Hamish