Elasticsearch index health turned Red!

Hi All,

A few hours after the index was created in Elasticsearch, the health of that index turned Red!
During my observation, a few suspect items are:
>item1: Shard Allocation Failed
>item2: org.apache.lucene.index.CorruptIndexException: codec footer mismatch (file truncated?): actual footer=0 vs expected footer=-1
>item3: hostname hostip - WARN - elasticsearch[master][refresh][T#1] - - - [org.elasticsearch.index.IndexService] [master] [logstash-2017.04.26] failed to run task refresh - suppressing re-occurring exceptions unless the exception changes
org.elasticsearch.index.engine.RefreshFailedEngineException: Refresh failed
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:658) ~[elasticsearch-5.1.2.jar:5.1.2]
Any ideas why this is happening? Is there a guide for recovering this index from Red back to Yellow/Green health?

The relevant logs are shown above.
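For reference, this is roughly how I am inspecting the cluster state (a minimal sketch only; it assumes the cluster answers on localhost:9200 and uses the Python requests library — adjust the endpoint for your own Kubernetes service):

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: replace with your cluster endpoint

# Per-index health shows which indices are red and how many shards are unassigned.
health = requests.get(ES_URL + "/_cluster/health", params={"level": "indices"}).json()
for name, index in health["indices"].items():
    if index["status"] == "red":
        print(name, "->", index["unassigned_shards"], "unassigned shard(s)")

# Allocation explain reports why an unassigned shard cannot be allocated
# (e.g. a CorruptIndexException on every copy of the shard).
explain = requests.get(ES_URL + "/_cluster/allocation/explain")
print(explain.json())
```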

Hi @Abhijit_Paul,

as I've already written on GitHub, this is likely due to a problem during the upgrade from Elasticsearch 2.x. Your index files got corrupted, as indicated by messages like codec footer mismatch (file truncated?): actual footer=0 vs expected footer=-1071082520.

You can find the upgrade instructions in the docs, specifically the instructions for upgrading across major versions (but you should follow the complete upgrade instructions).
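If it helps, here is a rough sketch (not an official procedure) of how you could check which version created each of your indices before worrying about the upgrade path; it assumes the cluster answers on localhost:9200 and uses the Python requests library:

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: replace with your cluster endpoint

# flat_settings=true returns dotted keys such as "index.version.created".
settings = requests.get(ES_URL + "/_all/_settings", params={"flat_settings": "true"}).json()
for index_name, body in settings.items():
    created = body["settings"].get("index.version.created")
    # 'created' is a raw version id; an index created on 2.x starts with "2".
    print(index_name, created)
```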

Daniel

Hi @danielmitterdorfer, thanks for your reply. I upgraded Elasticsearch 2.x to 5.x a long time back. I deployed ELK using an application descriptor in a Kubernetes environment, and I have two Kubernetes stacks, both running Elasticsearch 5.1.2, but this Red status is observed in only one of the stacks; the other stack is all fine.

That's why I'm ruling out the upgrade as the cause; if that were the case, the same behavior should be observed in both Kubernetes stacks.

OK, one thing I forgot to mention: I am using GlusterFS behind the scenes, and it looks like there is a known GlusterFS issue ("Elasticsearch gets CorruptIndexException errors when running with GlusterFS persistent storage") due to which the index health turns Red. This bug is fixed in GlusterFS 3.10: https://bugzilla.redhat.com/show_bug.cgi?id=1390050
I have yet to verify the same with GlusterFS 3.10.
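Once the underlying storage problem is sorted out, the recovery step I plan to try looks roughly like this (a sketch only; it assumes localhost:9200 and the Python requests library, and it cannot bring back data that is corrupt on every copy):

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: replace with your cluster endpoint

# _cat/shards lists every shard; UNASSIGNED rows also carry the unassignment reason.
shards = requests.get(
    ES_URL + "/_cat/shards",
    params={"h": "index,shard,prirep,state,unassigned.reason", "format": "json"},
).json()
for shard in shards:
    if shard["state"] == "UNASSIGNED":
        print(shard["index"], shard["shard"], shard["prirep"], shard["unassigned.reason"])

# retry_failed resets the allocation retry counter so shards that hit
# "max retries exceeded" are attempted again.
requests.post(ES_URL + "/_cluster/reroute", params={"retry_failed": "true"})
```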

Hi @Abhijit_Paul,

thanks for the update. That bug report is indeed an interesting observation. Is there a specific reason why you are running on this file system?

Daniel

We use it as a distributed persistent volume.

Hi @Abhijit_Paul,

thanks for the update.

I don't know anything about GlusterFS, but you should take care when using distributed file systems together with Elasticsearch, both in terms of performance and data consistency. For example, we also explicitly advise against using NFS.
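As a quick sanity check (just a sketch, assuming the cluster answers on localhost:9200 and the Python requests library is available), the node filesystem stats show which mount and filesystem type each data path actually sits on:

```python
import requests

ES_URL = "http://localhost:9200"  # assumption: replace with your cluster endpoint

# Node filesystem stats report, per data path, the mount point and filesystem type.
stats = requests.get(ES_URL + "/_nodes/stats/fs").json()
for node_id, node in stats["nodes"].items():
    for data_path in node["fs"]["data"]:
        print(node["name"], data_path["path"], data_path.get("mount"), data_path.get("type"))
```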

Daniel

Yes, true. GlusterFS is used as a distributed file system in the cloud.

I have exactly the same use case and the same issue as you.
I'm also running an ES cluster in Kubernetes and using GlusterFS as the storage backend. My GlusterFS version is 3.10.1, but I still see the same issue. Here's my topic, "Shard repeat to be UNASSIGNED", raised just yesterday...

With GlusterFS version 3.10.0 onward that issue is resolved. I found one more issue with the combination of GlusterFS & Elasticsearch; here is the link: https://bugzilla.redhat.com/show_bug.cgi?id=1430659

So the answer is: don't run ES in Kubernetes?

It seems that with GlusterFS 3.10.1 this issue still persists...
We can use ES with k8s, but whether to use it along with GlusterFS is still a mystery.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.