[WARN ][o.e.m.j.JvmGcMonitorService] [gc][212067] overhead

Recently we rolled out new search functionality to production, and from that same day we started seeing Elasticsearch 5.3.0 going down every other day.
I allocated 30 GB for the JVM heap, and we are running Elasticsearch on solid-state disks.
We have been using Elasticsearch for 4 years and haven't seen this before. Any idea why this is happening and how to resolve it?
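
In case it helps with narrowing this down, here is a minimal monitoring sketch (my own, not part of the original setup) that polls the `_nodes/stats/jvm` API and prints per-node heap usage, so the GC pressure in the warnings below can be correlated with the new search traffic. It assumes an unauthenticated node reachable at `http://localhost:9200`; adjust the URL and add auth for your cluster.

```python
# Sketch: periodically sample JVM heap usage from the node stats API.
# Endpoint and polling interval are assumptions, not from the original post.
import time
import requests

ES_URL = "http://localhost:9200"  # hypothetical endpoint; change for your cluster

def print_heap_usage():
    stats = requests.get(f"{ES_URL}/_nodes/stats/jvm").json()
    for node_id, node in stats["nodes"].items():
        mem = node["jvm"]["mem"]
        print(f'{node["name"]}: heap {mem["heap_used_percent"]}% '
              f'({mem["heap_used_in_bytes"] / 2**30:.1f} GiB of '
              f'{mem["heap_max_in_bytes"] / 2**30:.1f} GiB)')

if __name__ == "__main__":
    while True:
        print_heap_usage()
        time.sleep(30)  # sample every 30 seconds
```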

[2018-08-22T20:29:58,308][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212060][23685] duration [3.1s], collections [2]/[3.7s], total [3.1s]/[13.5m], memory [11gb]->[11.2gb]/[29.8gb], all_pools {[young] [1.3gb]->[2.6mb]/[1.4gb]}{[survivor] [3.4mb]->[191.3mb]/[191.3mb]}{[old] [9.6gb]->[11.1gb]/[28.1gb]}
[2018-08-22T20:29:58,308][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212060] overhead, spent [3.1s] collecting in the last [3.7s]
[2018-08-22T20:30:00,824][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212061][23686] duration [2.4s], collections [1]/[2.5s], total [2.4s]/[13.5m], memory [11.2gb]->[12.7gb]/[29.8gb], all_pools {[young] [2.6mb]->[7.7mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [11.1gb]->[12.5gb]/[28.1gb]}
[2018-08-22T20:30:00,824][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212061] overhead, spent [2.4s] collecting in the last [2.5s]
[2018-08-22T20:30:03,027][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212062][23687] duration [2.1s], collections [1]/[2.2s], total [2.1s]/[13.6m], memory [12.7gb]->[14gb]/[29.8gb], all_pools {[young] [7.7mb]->[67.3mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [12.5gb]->[13.7gb]/[28.1gb]}
[2018-08-22T20:30:03,027][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212062] overhead, spent [2.1s] collecting in the last [2.2s]
[2018-08-22T20:30:05,668][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212063][23688] duration [2.5s], collections [1]/[2.6s], total [2.5s]/[13.6m], memory [14gb]->[15.4gb]/[29.8gb], all_pools {[young] [67.3mb]->[30.2mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [13.7gb]->[15.2gb]/[28.1gb]}
[2018-08-22T20:30:05,668][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212063] overhead, spent [2.5s] collecting in the last [2.6s]
[2018-08-22T20:30:08,043][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212064][23689] duration [2.2s], collections [1]/[2.3s], total [2.2s]/[13.6m], memory [15.4gb]->[16.7gb]/[29.8gb], all_pools {[young] [30.2mb]->[19.1mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [15.2gb]->[16.5gb]/[28.1gb]}
[2018-08-22T20:30:08,043][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212064] overhead, spent [2.2s] collecting in the last [2.3s]
[2018-08-22T20:30:10,465][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212065][23690] duration [2.3s], collections [1]/[2.4s], total [2.3s]/[13.7m], memory [16.7gb]->[18.2gb]/[29.8gb], all_pools {[young] [19.1mb]->[30.7mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [16.5gb]->[17.9gb]/[28.1gb]}
[2018-08-22T20:30:10,465][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212065] overhead, spent [2.3s] collecting in the last [2.4s]
[2018-08-22T20:30:12,872][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212066][23691] duration [2.2s], collections [1]/[2.4s], total [2.2s]/[13.7m], memory [18.2gb]->[19.5gb]/[29.8gb], all_pools {[young] [30.7mb]->[27mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [17.9gb]->[19.3gb]/[28.1gb]}
[2018-08-22T20:30:12,872][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212066] overhead, spent [2.2s] collecting in the last [2.4s]
[2018-08-22T20:30:15,231][WARN ][o.e.m.j.JvmGcMonitorService] [gc][young][212067][23692] duration [2.2s], collections [1]/[2.3s], total [2.2s]/[13.8m], memory [19.5gb]->[20.9gb]/[29.8gb], all_pools {[young] [27mb]->[22mb]/[1.4gb]}{[survivor] [191.3mb]->[191.3mb]/[191.3mb]}{[old] [19.3gb]->[20.7gb]/[28.1gb]}
[2018-08-22T20:30:15,231][WARN ][o.e.m.j.JvmGcMonitorService] [gc][212067] overhead, spent [2.2s] collecting in the last [2.3s]
[2018-08-22T20:30:15,231][ERROR][o.e.x.m.c.c.ClusterStatsCollector] collector [cluster-stats-collector] timed out when collecting data

--Thanks
