Thank you @fkelbert. Disabling swap by modifying /etc/fstab improved the time spent in GC but did not completely solve the problem. Now Elasticsearch stays up and running, but Kibana looks frozen and only shows updated data after more than 40 seconds (and sometimes requests hit the 30-second timeout).
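For reference, this is roughly what I changed (a minimal sketch; the UUID is just a placeholder for my swap partition):

```
# turn off swap immediately
sudo swapoff -a

# then comment out the swap entry in /etc/fstab so it stays off after reboot, e.g.:
# UUID=xxxx-xxxx  none  swap  sw  0  0
```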
From the cluster logs I can see that it is spending less time in GC tasks:
[2018-09-25T13:58:34,940][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][476] overhead, spent [500ms] collecting in the last [1s]
[2018-09-25T13:58:36,058][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][477] overhead, spent [660ms] collecting in the last [1.1s]
[2018-09-25T13:58:37,109][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][478] overhead, spent [614ms] collecting in the last [1s]
[2018-09-25T13:58:38,111][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][479] overhead, spent [551ms] collecting in the last [1s]
[2018-09-25T13:58:39,113][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][480] overhead, spent [582ms] collecting in the last [1s]
[2018-09-25T13:58:40,113][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][481] overhead, spent [586ms] collecting in the last [1s]
[2018-09-25T13:58:41,113][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][482] overhead, spent [577ms] collecting in the last [1s]
[2018-09-25T13:58:42,114][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][483] overhead, spent [581ms] collecting in the last [1s]
[2018-09-25T13:58:43,119][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][484] overhead, spent [594ms] collecting in the last [1s]
[2018-09-25T13:58:44,121][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][485] overhead, spent [609ms] collecting in the last [1s]
[2018-09-25T13:58:45,178][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][486] overhead, spent [641ms] collecting in the last [1s]
[2018-09-25T13:58:46,236][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][487] overhead, spent [634ms] collecting in the last [1s]
[2018-09-25T13:58:47,321][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][488] overhead, spent [658ms] collecting in the last [1s]
[2018-09-25T13:58:48,390][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][489] overhead, spent [653ms] collecting in the last [1s]
[2018-09-25T13:58:49,476][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][490] overhead, spent [657ms] collecting in the last [1s]
[2018-09-25T13:59:02,537][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][old][491][4] duration [12.5s], collections [1]/[13s], total [12.5s]/[12.8s], memory [5.7gb]->[3.9gb]/[5.9gb], all_pools {[young] [13.3mb]->[12.2mb]/[133.1mb]}{[survivor] [16.6mb]->[0b]/[16.6mb]}{[old] [5.7gb]->[3.9gb]/[5.8gb]}
[2018-09-25T13:59:02,537][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][491] overhead, spent [12.7s] collecting in the last [13s]
[2018-09-25T13:59:03,538][INFO ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][492] overhead, spent [491ms] collecting in the last [1s]
[2018-09-25T13:59:04,564][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][493] overhead, spent [716ms] collecting in the last [1s]
[2018-09-25T13:59:05,619][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][494] overhead, spent [633ms] collecting in the last [1s]
[2018-09-25T13:59:06,687][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][495] overhead, spent [622ms] collecting in the last [1s]
[2018-09-25T13:59:07,748][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][496] overhead, spent [640ms] collecting in the last [1s]
[2018-09-25T13:59:08,826][WARN ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][497] overhead, spent [635ms] collecting in the last [1s]
[2018-09-25T13:59:09,836][INFO ][o.e.m.j.JvmGcMonitorService] [o-KoSvH] [gc][498] overhead, spent [353ms] collecting in the last [1s]
but it is not enough. I don't know why this happens on 6.x when it never happened on 5.6.
Is there something else I can do? Maybe on the Kibana side?
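For what it's worth, I assume the 30-second timeout I see is Kibana's default elasticsearch.requestTimeout. I could raise it in kibana.yml as a workaround, but I suppose that would only hide the slowness rather than fix it:

```yaml
# kibana.yml -- value in milliseconds; 60000 is just an illustrative bump over the 30s default
elasticsearch.requestTimeout: 60000
```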
Thank you!