Recovery failed, no activity after 30m, networking seems fine

Last night I started restoring a snapshot of 12 small indices to a 6-node cluster (v2.3.1) running on AWS m4.2xlarge instances, each with a 2TB disk.
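
For reference, the restore was kicked off with something along these lines (repository and snapshot names are placeholders, and the index pattern stands in for the 12 indices):

curl -XPOST 'http://localhost:9200/_snapshot/my_backup_repo/snapshot_1/_restore' -d '{
  "indices": "metrics-persistent-*"
}'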

The shards won't allocate to any node.

Every node has plenty of available disk (at least 1TB each).
All nodes are master-eligible data nodes that also accept client connections, and they all have Marvel and Kibana running on them.

The errors I'm seeing in the logs look like this:

[2017-11-06 04:20:55,982][WARN ][indices.cluster          ] [es-live-6] [[metrics-persistent-2017-01][4]] marking and sending shard failed due to [failed recovery]
RecoveryFailedException[[metrics-persistent-2017-01][4]: Recovery failed from {es-live-5}{nJSjNL5GRo-N-cI0YH5RbQ}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true} into {es-live-6}{PGXmWTURQ1eQZS_5E1k2Kw}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true} (no activity after [30m])]; nested: ElasticsearchTimeoutException[no activity after [30m]];
	at org.elasticsearch.indices.recovery.RecoveriesCollection$RecoveryMonitor.doRun(RecoveriesCollection.java:236)
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
Caused by: ElasticsearchTimeoutException[no activity after [30m]]

I can successfully curl the addresses and ports (port 9300 just returns "This is not a HTTP port", as expected), so I'm assuming networking is fine.
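
For what it's worth, the checks were just plain curl against each node's HTTP and transport ports (IPs redacted):

# HTTP port responds with the usual cluster info JSON
curl http://XXX.XXX.XXX.XXX:9200/
# transport port responds with "This is not a HTTP port"
curl http://XXX.XXX.XXX.XXX:9300/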

All nodes are showing very high ram.percent (>90 on all of them) but only moderate heap.percent (in the 50s).
Netdata shows system RAM usage as not being particularly high.
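
Those heap.percent / ram.percent figures come from the cat nodes API, roughly:

curl 'http://localhost:9200/_cat/nodes?v&h=name,heap.percent,ram.percent,node.role,master'

and the per-node disk headroom mentioned above is from:

curl 'http://localhost:9200/_cat/allocation?v'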

Hot threads look like this:

::: {es-live-7}{OPbLZrE8RHyC0-Eva3WaGQ}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true}
   Hot threads at 2017-11-06T14:13:01.633Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
    0.0% (99.3micros out of 500ms) cpu usage by thread 'elasticsearch[es-live-7][transport_client_timer][T#1]{Hashed wheel timer #1}'
     10/10 snapshots sharing following 5 elements
       java.lang.Thread.sleep(Native Method)
       org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
       org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
       org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
       java.lang.Thread.run(Thread.java:748)

::: {es-live-9}{r79b4XstTAm0UKfzdxHaOw}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true}
   Hot threads at 2017-11-06T14:13:01.634Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
    0.0% (76.4micros out of 500ms) cpu usage by thread 'elasticsearch[es-live-9][transport_client_timer][T#1]{Hashed wheel timer #1}'
     10/10 snapshots sharing following 5 elements
       java.lang.Thread.sleep(Native Method)
       org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
       org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
       org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
       java.lang.Thread.run(Thread.java:748)

::: {es-live-8}{0lcMCUmdRwqr6NaAau5Ehg}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true}
   Hot threads at 2017-11-06T14:13:01.633Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
    0.0% (71.7micros out of 500ms) cpu usage by thread 'elasticsearch[es-live-8][transport_client_timer][T#1]{Hashed wheel timer #1}'
     10/10 snapshots sharing following 5 elements
       java.lang.Thread.sleep(Native Method)
       org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
       org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
       org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
       java.lang.Thread.run(Thread.java:748)

::: {es-live-5}{nJSjNL5GRo-N-cI0YH5RbQ}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true}
   Hot threads at 2017-11-06T14:13:01.631Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:
   
    0.0% (94.1micros out of 500ms) cpu usage by thread 'elasticsearch[es-live-5][transport_client_timer][T#1]{Hashed wheel timer #1}'
     10/10 snapshots sharing following 5 elements
       java.lang.Thread.sleep(Native Method)
       org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
       org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
       org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
       java.lang.Thread.run(Thread.java:748)

::: {es-live-6}{PGXmWTURQ1eQZS_5E1k2Kw}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true}
   Hot threads at 2017-11-06T14:13:01.633Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

::: {es-live-10}{29trlJX5QoiCub0xEon4xw}{XXX.XXX.XXX.XXX}{XXX.XXX.XXX.XXX:9300}{master=true}
   Hot threads at 2017-11-06T14:13:01.633Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

I've even tried running a script to manually route the shards to a node. It allocates the shards, they spend an inordinate amount of time in the INITIALIZING state, and then eventually go back to UNASSIGNED.
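
The script just issues a 2.x-style reroute allocate per shard, something like this (index, shard, and node values vary per shard):

curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
  "commands": [
    { "allocate": { "index": "metrics-persistent-2017-01", "shard": 4, "node": "es-live-6" } }
  ]
}'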

Some of these indices are only a few KB.

What is going on here? Why won't the shards allocate?

Thanks for any help!

I'm also seeing this sort of message a lot in the logs on at least 3 of the nodes:

[2017-11-06 14:34:18,928][DEBUG][index.shard              ] [es-live-10] [metrics-persistent-2017-11][2] updateBufferSize: engine is closed; skipping
