Continuous garbage collection

My Elasticsearch instance regularly goes into continuous garbage collection, with log entries like this appearing every 3 seconds:

[2017-03-29T15:23:29,060][WARN ][o.e.m.j.JvmGcMonitorService] [3BURiFV] [gc][110] overhead, spent [1.8s] collecting in the last [1.8s]

When this happens it uses a lot of CPU and becomes unresponsive. This is a single-node setup.

I am only storing Logstash and Metricbeat data from a few hosts. I don't think that would be considered a large amount of data, though perhaps it is; /_stats reports 22 GB of data in total.
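For reference, I pulled that figure with a store-stats request along these lines (localhost:9200 is just my local default):

curl -s 'localhost:9200/_stats/store?pretty'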

I am running the JVM (32-bit) with -Xms1g -Xmx1g.
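Those flags come from the heap settings in my jvm.options, i.e. something like this (the path assumes a standard package install):

# /etc/elasticsearch/jvm.options
-Xms1g
-Xmx1g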

Is this simply not enough memory? What should I do?

Thanks,

Hamish

1 GB is a very small heap for 22 GB of data. Based on your description it sounds like you need a box with more RAM and a bigger heap. Note also that a 32-bit JVM caps the heap at a few gigabytes at most, so you will likely need to move to a 64-bit JVM as well; the usual guidance is to set -Xms and -Xmx to the same value, no more than about half of the machine's RAM.
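A minimal sketch of the change, assuming a 64-bit JVM and a box with 8 GB+ of RAM (adjust the numbers to your hardware):

# config/jvm.options
# keep -Xms and -Xmx equal, and no more than about half the machine's RAM
-Xms4g
-Xmx4g

After a restart you can keep an eye on heap pressure with:

curl -s 'localhost:9200/_cat/nodes?v&h=name,heap.percent,heap.max'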


I don't mind if queries are slow, as I'm stashing a lot of log data but rarely accessing it.

I've deleted some old data and closed some old indices, and it's running again for now.
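The cleanup was roughly like this (logstash-2017.01* is only an example pattern, substitute your own index names; wildcard deletes also assume action.destructive_requires_name has not been enabled):

# delete indices I no longer need at all
curl -XDELETE 'localhost:9200/logstash-2017.01*'

# close indices to keep the data on disk while freeing heap
curl -XPOST 'localhost:9200/logstash-2017.02*/_close'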
