Good day.
I have ES 2.1.0 with Kibana in a single-node demo environment. After an unclean shutdown, the .kibana
index refuses to come up because of the following error in the logs:
[2016-01-04 18:52:06,313][WARN ][cluster.action.shard ] [One Above All] [.kibana][0] received shard failed for [.kibana][0], node[gF-MmFXhRD6RpzhoX4j0fw], [P], v[480], s[INITIALIZING], a[id=WOf4yH4-SRql1VcU667asQ], unassigned_info[[reason=ALLOCATION_FAILED], at[2016-01-04T16:52:06.306Z], details[failed recovery, failure IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: NoSuchFileException[/var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/translog-44.tlog]; ]], indexUUID [fUvsCJ_XTMy29AhHwArEww], message [failed recovery], failure [IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: NoSuchFileException[/var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/translog-44.tlog]; ]
[.kibana][[.kibana][0]] IndexShardRecoveryException[failed to recovery from gateway]; nested: EngineCreationFailureException[failed to create engine]; nested: NoSuchFileException[/var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/translog-44.tlog];
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:254)
at org.elasticsearch.index.shard.StoreRecoveryService.access$100(StoreRecoveryService.java:56)
at org.elasticsearch.index.shard.StoreRecoveryService$1.run(StoreRecoveryService.java:129)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: [.kibana][[.kibana][0]] EngineCreationFailureException[failed to create engine]; nested: NoSuchFileException[/var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/translog-44.tlog];
at org.elasticsearch.index.engine.InternalEngine.<init>(InternalEngine.java:156)
at org.elasticsearch.index.engine.InternalEngineFactory.newReadWriteEngine(InternalEngineFactory.java:25)
at org.elasticsearch.index.shard.IndexShard.newEngine(IndexShard.java:1408)
at org.elasticsearch.index.shard.IndexShard.createNewEngine(IndexShard.java:1403)
at org.elasticsearch.index.shard.IndexShard.internalPerformTranslogRecovery(IndexShard.java:906)
at org.elasticsearch.index.shard.IndexShard.performTranslogRecovery(IndexShard.java:883)
at org.elasticsearch.index.shard.StoreRecoveryService.recoverFromStore(StoreRecoveryService.java:245)
... 5 more
Caused by: java.nio.file.NoSuchFileException: /var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/translog-44.tlog
at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
at sun.nio.fs.UnixFileSystemProvider.newFileChannel(UnixFileSystemProvider.java:177)
...
(I've cut the last lines because of the post size limit. The full exception is here: http://pastebin.com/raw/zWAjwxx3)
The message keeps filling the log file and eating up gigabytes of disk space (even after upgrading to ES 2.1.1).
The file indeed does not exist. Here are the contents of /var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/:
$ ls -lah /var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog
total 112K
drwxr-xr-x 2 elasticsearch elasticsearch 4.0K Jan 4 18:56 .
drwxr-xr-x 5 elasticsearch elasticsearch 4.0K Jan 4 17:25 ..
-rw-r--r-- 1 elasticsearch elasticsearch 20 Jan 3 19:18 translog-43.ckp
-rw-r--r-- 1 elasticsearch elasticsearch 96K Jan 3 19:12 translog-43.tlog
-rw-r--r-- 1 elasticsearch elasticsearch 20 Jan 3 19:19 translog.ckp
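My guess (an assumption on my part, not something I have verified against the ES source) is that the 20-byte translog.ckp checkpoint still references generation 44, while only translog-43.tlog is present on disk, which would explain the NoSuchFileException. Dumping the checkpoint at least shows the raw bytes it holds:

$ hexdump -C /var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog/translog.ckp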
What I have tried so far:
- Upgrading to 2.1.1
- Removing the translog files, fully or partially (ES was stopped while I fiddled with the files; see the sketch below)
Nothing helped.
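In case it matters, here is roughly what the translog removal looked like (a sketch of my own steps; stopping/starting via the service command is just how my box is set up):

$ sudo service elasticsearch stop
$ cd /var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/translog
$ sudo mkdir -p /root/kibana-translog-backup
$ sudo mv translog-43.tlog translog-43.ckp translog.ckp /root/kibana-translog-backup/   # or only some of these
$ sudo service elasticsearch start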
How can I force ES to start with whatever there is in the index and begin the transaction log from scratch?
The index data itself seems to be fine. Please don't tell me that one unclean shutdown can kill the whole index.
I've checked the index itself using the following: http://pastebin.com/raw/LnBDtJXP
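(If that paste becomes unavailable: the Lucene segments of the shard can also be verified directly with CheckIndex, along these lines; the lucene-core jar name/version below is my guess for a 2.1.x install and may differ:

$ java -cp /usr/share/elasticsearch/lib/lucene-core-5.3.1.jar org.apache.lucene.index.CheckIndex /var/lib/elasticsearch/zaar/nodes/0/indices/.kibana/0/index/
)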
Thank you in advance!