I've observed that our nodes run GC while snapshots are in progress. Here are some example GC log entries captured during snapshots.
Node 1:
[2018-10-14T06:01:56,133][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121610] overhead, spent [257ms] collecting in the last [1s]
[2018-10-14T06:02:09,020][WARN ][o.e.m.j.JvmGcMonitorService] [n1] [gc][old][1121612][216] duration [11.3s], collections [1]/[11.8s], total [11.3s]/[4.4m], memory [18.5gb]->[8.3gb]/[29.8gb], all_pools {[young] [795.3mb]->[221.8mb]/[1.4gb]}{[survivor] [118.7mb]->[0b]/[191.3mb]}{[old] [17.6gb]->[8gb]/[28.1gb]}
[2018-10-14T06:02:09,020][WARN ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121612] overhead, spent [11.5s] collecting in the last [11.8s]
[2018-10-14T06:06:22,978][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121865] overhead, spent [312ms] collecting in the last [1s]
[2018-10-14T06:06:23,978][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121866] overhead, spent [463ms] collecting in the last [1.2s]
[2018-10-14T06:06:25,090][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121867] overhead, spent [313ms] collecting in the last [1.1s]
[2018-10-14T06:06:26,102][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121868] overhead, spent [343ms] collecting in the last [1s]
[2018-10-14T06:06:51,282][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121893] overhead, spent [367ms] collecting in the last [1s]
[2018-10-14T06:06:53,283][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121895] overhead, spent [256ms] collecting in the last [1s]
[2018-10-14T06:07:06,347][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121908] overhead, spent [255ms] collecting in the last [1s]
[2018-10-14T06:07:07,410][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121909] overhead, spent [282ms] collecting in the last [1s]
[2018-10-14T06:07:10,583][INFO ][o.e.m.j.JvmGcMonitorService] [n1] [gc][1121912] overhead, spent [348ms] collecting in the last [1.1s]
Node 2:
[2018-10-14T06:06:16,697][INFO ][o.e.m.j.JvmGcMonitorService] [n2] [gc][old][19089995][4491] duration [7s], collections [1]/[8.4s], total [7s]/[1h], memory [18.5gb]->[9.9gb]/[29.8gb], all_pools {[young] [738.3mb]->[66mb]/[1.4gb]}{[survivor] [149.9mb]->[0b]/[191.3mb]}{[old] [17.6gb]->[9.8gb]/[28.1gb]}
[2018-10-14T06:06:16,699][WARN ][o.e.m.j.JvmGcMonitorService] [n2] [gc][19089995] overhead, spent [7.4s] collecting in the last [8.4s]
[2018-10-14T06:06:25,703][INFO ][o.e.m.j.JvmGcMonitorService] [n2] [gc][19090004] overhead, spent [281ms] collecting in the last [1s]
[2018-10-14T06:06:26,706][INFO ][o.e.m.j.JvmGcMonitorService] [n2] [gc][19090005] overhead, spent [254ms] collecting in the last [1s]
[2018-10-14T06:07:07,981][INFO ][o.e.m.j.JvmGcMonitorService] [n2] [gc][19090046] overhead, spent [277ms] collecting in the last [1s]
[2018-10-14T06:07:09,981][INFO ][o.e.m.j.JvmGcMonitorService] [n2] [gc][19090048] overhead, spent [299ms] collecting in the last [1s]
[2018-10-14T06:07:22,186][INFO ][o.e.m.j.JvmGcMonitorService] [n2] [gc][19090060] overhead, spent [294ms] collecting in the last [1s]
We can see that old GCs happened. What I found is that if a node had already run old GCs shortly before a snapshot, it gets through the snapshot without an old GC and only shows many young (non-old) GCs after the snapshot starts...
Can this affect system performance? And is there a way to tune the system so it doesn't GC while a snapshot is being created? For example, can we force a node to run an old GC before the snapshot?
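
In case it helps clarify what I have in mind, here is a rough Python sketch of the kind of workaround I'm considering: before starting a snapshot, check each node's heap usage via the nodes stats API and only kick off the snapshot when heap pressure looks low, so an old GC is less likely to land in the middle of it. The endpoint, the repository name "my_backup", and the 75% threshold are placeholders/assumptions, and heap_used_percent is only a rough proxy for "an old GC is coming soon":

# Sketch only: defer the snapshot when any node's heap usage is high.
# Assumptions: a local cluster endpoint, an already-registered snapshot
# repository called "my_backup", and 75% as the heap threshold.
import datetime
import requests

ES = "http://localhost:9200"   # assumption: cluster HTTP endpoint
REPO = "my_backup"             # assumption: registered snapshot repository
HEAP_THRESHOLD = 75            # assumption: defer snapshot above this heap_used_percent

def heap_ok():
    # Nodes stats API reports per-node JVM heap usage.
    stats = requests.get(f"{ES}/_nodes/stats/jvm").json()
    for node in stats["nodes"].values():
        used = node["jvm"]["mem"]["heap_used_percent"]
        print(f"{node['name']}: heap {used}%")
        if used > HEAP_THRESHOLD:
            return False
    return True

def take_snapshot():
    # Start a snapshot without waiting for it to complete.
    name = "snapshot-" + datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    r = requests.put(f"{ES}/_snapshot/{REPO}/{name}",
                     params={"wait_for_completion": "false"})
    r.raise_for_status()
    print(f"started {name}")

if __name__ == "__main__":
    if heap_ok():
        take_snapshot()
    else:
        print("heap usage too high; deferring snapshot")

I realize this doesn't actually force an old GC; as far as I know that would take something like jcmd <pid> GC.run on the node itself, which I'd rather avoid. It only tries to not start a snapshot while the heap is already near full. Is there a better-supported way to handle this?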