Hi all,
I am facing an issue right now: my cluster is in green state, but all indexing and search operations are very slow.
Here are the last log lines from my master node:
[2017-06-21T16:01:58,026][WARN ][o.e.a.a.c.n.s.TransportNodesStatsAction] [SCCHIB4ESCB-10] not accumulating exceptions, excluding exception from response
org.elasticsearch.action.FailedNodeException: Failed node [-d72YxtkQSWtRryyUfhFEA]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:246) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$200(TransportNodesAction.java:160) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:218) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1050) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:933) [elasticsearch-5.4.1.jar:5.4.1]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:569) [elasticsearch-5.4.1.jar:5.4.1]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [SCCHIB4ESCB-01][172.17.55.33:9300][cluster:monitor/nodes/stats[n]] request_id [953212] timed out after [15000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:934) ~[elasticsearch-5.4.1.jar:5.4.1]
... 4 more
[2017-06-21T16:02:37,642][WARN ][o.e.t.TransportService ] [SCCHIB4ESCB-10] Received response for a request that has timed out, sent [54616ms] ago, timed out [39616ms] ago, action [cluster:monitor/nodes/stats[n]], node [{SCCHIB4ESCB-01}{-d72YxtkQSWtRryyUfhFEA}{tgx7hZTgQ8aJBNOxr0IDlQ}{172.17.55.33}{172.17.55.33:9300}], id [953212]
[2017-06-21T16:04:48,025][WARN ][o.e.a.a.c.n.s.TransportNodesStatsAction] [SCCHIB4ESCB-10] not accumulating exceptions, excluding exception from response
org.elasticsearch.action.FailedNodeException: Failed node [-d72YxtkQSWtRryyUfhFEA]
	... (same stack trace as above) ...
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [SCCHIB4ESCB-01][172.17.55.33:9300][cluster:monitor/nodes/stats[n]] request_id [953538] timed out after [15000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:934) ~[elasticsearch-5.4.1.jar:5.4.1]
... 4 more
[2017-06-21T16:05:48,027][WARN ][o.e.a.a.c.n.s.TransportNodesStatsAction] [SCCHIB4ESCB-10] not accumulating exceptions, excluding exception from response
org.elasticsearch.action.FailedNodeException: Failed node [-d72YxtkQSWtRryyUfhFEA]
	... (same stack trace as above) ...
Caused by: org.elasticsearch.transport.ReceiveTimeoutTransportException: [SCCHIB4ESCB-01][172.17.55.33:9300][cluster:monitor/nodes/stats[n]] request_id [953650] timed out after [15000ms]
at org.elasticsearch.transport.TransportService$TimeoutHandler.run(TransportService.java:934) ~[elasticsearch-5.4.1.jar:5.4.1]
... 4 more
[2017-06-21T16:05:57,061][WARN ][o.e.t.TransportService ] [SCCHIB4ESCB-10] Received response for a request that has timed out, sent [84036ms] ago, timed out [69036ms] ago, action [cluster:monitor/nodes/stats[n]], node [{SCCHIB4ESCB-01}{-d72YxtkQSWtRryyUfhFEA}{tgx7hZTgQ8aJBNOxr0IDlQ}{172.17.55.33}{172.17.55.33:9300}], id [953538]
[2017-06-21T16:05:57,061][WARN ][o.e.t.TransportService ] [SCCHIB4ESCB-10] Received response for a request that has timed out, sent [24034ms] ago, timed out [9034ms] ago, action [cluster:monitor/nodes/stats[n]], node [{SCCHIB4ESCB-01}{-d72YxtkQSWtRryyUfhFEA}{tgx7hZTgQ8aJBNOxr0IDlQ}{172.17.55.33}{172.17.55.33:9300}], id [953650]
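In case it helps, these are the kinds of requests I can run to gather more detail about why SCCHIB4ESCB-01 keeps timing out on node-stats requests. They are standard Elasticsearch 5.x REST APIs; the host and port below are just the transport address from the log with the assumed default HTTP port 9200, so adjust as needed:

```shell
# Hot threads on the node that keeps timing out, to see what it is
# busy with (GC, segment merges, heavy searches, ...):
curl -s 'http://172.17.55.33:9200/_nodes/SCCHIB4ESCB-01/hot_threads?threads=5'

# Thread pool activity, queue depth, and rejections across the cluster:
curl -s 'http://172.17.55.33:9200/_cat/thread_pool?v&h=node_name,name,active,queue,rejected'

# Pending cluster-level tasks, in case the master is backed up:
curl -s 'http://172.17.55.33:9200/_cat/pending_tasks?v'
```

Since the `ReceiveTimeoutTransportException` entries show responses arriving tens of seconds after the 15s timeout, the node is responding, just far too slowly, so long GC pauses or a saturated thread pool on that node would be the first things these outputs should reveal.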