Hi all,
I've run into a problem with my Elasticsearch container. It seems Elasticsearch cannot load the index and is exhausting my resources. Here's what I found when checking the Docker container stats:
CONTAINER             CPU %     MEM USAGE/LIMIT      MEM %    NET I/O
elk_elasticsearch_1   119.52%   1.395 GB/3.946 GB    35.34%   33.22 MB/8.596 MB
elk_kibana_1          0.00%     147.5 MB/629.1 MB    23.44%   1.948 MB/6.108 MB
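For reference, those numbers come from a plain docker stats run on the host, nothing custom:

docker stats elk_elasticsearch_1 elk_kibana_1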
My configuration is the default Elasticsearch configuration, hosted on a c4.large EC2 instance, running Elasticsearch 2.2.1.
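Since I'm on the defaults, I assume the JVM heap is also the default (I haven't set ES_HEAP_SIZE). In case it's relevant, this is how I've been checking the heap settings on the node (assuming the HTTP port is published on localhost:9200, as in my setup):

curl 'localhost:9200/_nodes/jvm?pretty'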
This is the error from my Elasticsearch container:
[2016-03-28 15:49:23,207][DEBUG][action.search.type ] [Death] [production-api_log-2016.02.29][4], node[c3bWhUvBR52kYNwpL9JXhA], [P], v[30], s[STARTED], a[id=B73N_Y4kTsi80uus31ZzgA]: Failed to execute [org.elasticsearch.action.search.SearchRequest@41f0b9fb] lastShard [true]
RemoteTransportException[[Death][172.17.0.29:9300][indices:data/read/search[phase/query]]]; nested: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@51f4345d on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@2c63c5eb[Running, pool size = 4, active threads = 4, queued tasks = 1000, completed tasks = 92657]]];
Caused by: EsRejectedExecutionException[rejected execution of org.elasticsearch.transport.TransportService$4@51f4345d on EsThreadPoolExecutor[search, queue capacity = 1000, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@2c63c5eb[Running, pool size = 4, active threads = 4, queued tasks = 1000, completed tasks = 92657]]]
at org.elasticsearch.common.util.concurrent.EsAbortPolicy.rejectedExecution(EsAbortPolicy.java:50)
at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:823)
at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1369)
at org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor.execute(EsThreadPoolExecutor.java:85)
at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:346)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:310)
at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:282)
at org.elasticsearch.search.action.SearchServiceTransportAction.sendExecuteQuery(SearchServiceTransportAction.java:142)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction$AsyncAction.sendExecuteFirstPhase(TransportSearchQueryThenFetchAction.java:85)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.performFirstPhase(TransportSearchTypeAction.java:166)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.start(TransportSearchTypeAction.java:148)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:64)
at org.elasticsearch.action.search.type.TransportSearchQueryThenFetchAction.doExecute(TransportSearchQueryThenFetchAction.java:53)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:99)
at org.elasticsearch.action.search.TransportSearchAction.doExecute(TransportSearchAction.java:44)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
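If I'm reading the exception right, the search thread pool (pool size 4 on this instance) has its 1000-slot queue completely full, so new search requests are being rejected. I've been watching the pool with the cat API (same localhost:9200 assumption as above):

curl 'localhost:9200/_cat/thread_pool?v&h=host,search.active,search.queue,search.rejected'

I know I could raise threadpool.search.queue_size in elasticsearch.yml as a stopgap, but I suspect that would only hide whatever is flooding the node with search requests.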
I've searched many posts on this forum but didn't find a similar problem.
I've hit a wall, and our production logging is in trouble.
Please help, friends.