Elasticsearch is not working after a server reboot, and I don't know why.

**elasticsearch**

```
[2018-11-15T16:04:13,149][DEBUG][o.e.a.s.TransportSearchAction] [cloudoc1] All shards failed for phase: [query]
[2018-11-15T16:04:13,180][WARN ][r.suppressed ] path: /.kibana/_search, params: {ignore_unavailable=true, index=.kibana, filter_path=aggregations.types.buckets}
org.elasticsearch.action.search.SearchPhaseExecutionException: all shards failed
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseFailure(AbstractSearchAsyncAction.java:293) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.executeNextPhase(AbstractSearchAsyncAction.java:133) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onPhaseDone(AbstractSearchAsyncAction.java:254) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.search.InitialSearchPhase.onShardFailure(InitialSearchPhase.java:101) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.search.InitialSearchPhase.lambda$performPhaseOnShard$1(InitialSearchPhase.java:210) ~[elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.action.search.InitialSearchPhase$1.doRun(InitialSearchPhase.java:189) [elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:723) [elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.common.util.concurrent.TimedRunnable.doRun(TimedRunnable.java:41) [elasticsearch-6.4.2.jar:6.4.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-6.4.2.jar:6.4.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [?:1.8.0_111]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [?:1.8.0_111]
at java.lang.Thread.run(Thread.java:745) [?:1.8.0_111]
```

**kibana**

```
[07:06:03.404] [warning][stats-collection] [search_phase_execution_exception] all shards failed :: {"path":"/.kibana/_search","query":{"ignore_unavailable":true,"filter_path":"aggregations.types.buckets"},"body":"{\"size\":0,\"query\":{\"terms\":{\"type\":[\"dashboard\",\"visualization\",\"search\",\"index-pattern\",\"graph-workspace\",\"timelion-sheet\"]}},\"aggs\":{\"types\":{\"terms\":{\"field\":\"type\",\"size\":6}}}}","statusCode":503,"response":"{\"error\":{\"root_cause\":[…],\"type\":\"search_phase_execution_exception\",\"reason\":\"all shards failed\",\"phase\":\"query\",\"grouped\":true,\"failed_shards\":[…]},\"status\":503}"}
at respond (C:\Users\Administrator\Desktop\kibana01\node_modules\elasticsearch\src\lib\transport.js:307:15)
at checkRespForFailure (C:\Users\Administrator\Desktop\kibana01\node_modules\elasticsearch\src\lib\transport.js:266:7)
at HttpConnector.<anonymous> (C:\Users\Administrator\Desktop\kibana01\node_modules\elasticsearch\src\lib\connectors\http.js:159:7)
at IncomingMessage.bound (C:\Users\Administrator\Desktop\kibana01\node_modules\elasticsearch\node_modules\lodash\dist\lodash.js:729:21)
at emitNone (events.js:111:20)
at IncomingMessage.emit (events.js:208:7)
at endReadableNT (_stream_readable.js:1064:12)
at _combinedTickCallback (internal/process/next_tick.js:138:11)
at process._tickDomainCallback (internal/process/next_tick.js:218:9)
log [07:06:03.404] [warning][stats-collection] Unable to fetch data from kibana collector
```

**elasticsearch.yml**
```
cluster.name: cloudocCluster
node.name: cloudoc1
network.host: 192.168.1.99
http.port: 9200
```

**kibana.yml**
```
server.host: "192.168.1.99"
server.name: "cloudockibana"
elasticsearch.url: "http://192.168.1.99:9200"
```

What does the output from `_cat/health` show?
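For example, using the host and port from the elasticsearch.yml you posted (`?v` just adds column headers):

```
# cluster health: status, shard totals, unassigned shard count
curl -s 'http://192.168.1.99:9200/_cat/health?v'

# per-node heap, load, and roles (useful alongside the health output)
curl -s 'http://192.168.1.99:9200/_cat/nodes?v'
```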

[screenshot of console output posted as an image]

Please don't post pictures of text; they are difficult to read, and some people may not even be able to see them.

You have far too many shards for a cluster of that size. Please read this blog post on shards and sharding, and try to reduce the shard count significantly, e.g. by reindexing into fewer, larger indices.
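A rough sketch of that approach, with purely illustrative index names (`logs-2018.11.*` is an assumption, substitute your own; on a single-node cluster, `number_of_replicas: 0` also stops replicas from sitting unassigned):

```
# first, see which indices account for the most primary shards
curl -s 'http://192.168.1.99:9200/_cat/indices?v&s=pri:desc'

# create one destination index with a single primary shard
# (name and settings are illustrative, not taken from this cluster)
curl -s -XPUT 'http://192.168.1.99:9200/logs-2018.11' \
  -H 'Content-Type: application/json' -d '{
    "settings": { "number_of_shards": 1, "number_of_replicas": 0 }
  }'

# copy the many small source indices into it
curl -s -XPOST 'http://192.168.1.99:9200/_reindex' \
  -H 'Content-Type: application/json' -d '{
    "source": { "index": "logs-2018.11.*" },
    "dest":   { "index": "logs-2018.11" }
  }'

# after verifying doc counts match, delete the originals, e.g.:
# curl -s -XDELETE 'http://192.168.1.99:9200/logs-2018.11.01'
```

Note that `_reindex` only copies; it is deleting the original indices that actually brings the shard count down.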

```
epoch      timestamp cluster        status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1542267197 16:33:17  cloudocCluster red             1         1   1296 1296    0    4     9285             8               3.1s                 12.2%
ip           heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
192.168.1.99           33          68  71                          mdi       *      cloudoc1
```

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.