Failed engine - MapperParsingException

So it seems that we have somehow got some invalid events into our index.
The effect is that two shards are permanently stuck in the INITIALIZING
state and never finish recovering.
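For context, this is how I'm looking at the stuck shards (localhost:9200
stands in for one of our nodes — adjust host/port as needed):

```shell
# per-shard state for today's index; the stuck replicas show as INITIALIZING
curl -s 'localhost:9200/_cat/shards/logstash-2015.01.22?v'

# cluster health broken down to shard level
curl -s 'localhost:9200/_cluster/health?level=shards&pretty'
```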

In the Elasticsearch logs I see entries like this:

[2015-01-22 12:09:01,124][WARN ][index.engine.internal    ] [es113-es1] [logstash-2015.01.22][2] failed engine [indices:data/write/bulk[s] failed on replica]
org.elasticsearch.index.mapper.MapperParsingException: object mapping for [syslog] tried to parse as object, but got EOF, has a concrete value been provided to it?
        at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:498)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:541)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:490)
        at org.elasticsearch.index.shard.service.InternalIndexShard.prepareCreate(InternalIndexShard.java:392)
        at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnReplica(TransportShardBulkAction.java:592)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicaOperationTransportHandler.messageReceived(TransportShardReplicationOperationAction.java:246)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$ReplicaOperationTransportHandler.messageReceived(TransportShardReplicationOperationAction.java:225)
        at org.elasticsearch.transport.netty.MessageChannelHandler$RequestHandler.run(MessageChannelHandler.java:275)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

and entries like this:

[2015-01-22 12:23:09,668][WARN ][cluster.action.shard     ] [es113-es2] [logstash-2015.01.22][14] sending failed shard for [logstash-2015.01.22][14], node[TnwHn4RRQAyJ69yUSy8fxA], [R], s[INITIALIZING], indexUUID [RXoQmyV6Sa2hz2v2DulC5g], reason [Failed to start shard, message [CorruptIndexException[[logstash-2015.01.22][14] Preexisting corrupted index [corrupted_KMA-6aQDTN2ieXMZ6VXv3g] caused by: CorruptIndexException[codec footer mismatch: actual footer=-2147285750 vs expected footer=-1071082520 (resource: NIOFSIndexInput(path="/DATA2/es-data/RCA-ES/nodes/0/indices/logstash-2015.01.22/14/index/_kk4.fdt"))]
org.apache.lucene.index.CorruptIndexException: codec footer mismatch: actual footer=-2147285750 vs expected footer=-1071082520 (resource: NIOFSIndexInput(path="/DATA2/es-data/RCA-ES/nodes/0/indices/logstash-2015.01.22/14/index/_kk4.fdt"))

But I cannot see the actual event that caused the problem, and worse, I
cannot remove it from the index to let the cluster recover to green...
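The only workaround I can think of (untested — localhost:9200 is again a
placeholder for one of our nodes) is to drop the replica count for the
affected index so the corrupt replica copies are discarded, then add the
replicas back so they are rebuilt from the primaries. Both failures above
are on replicas ([R], "failed on replica"), so the primaries should be
intact:

```shell
# discard the replica copies of the affected index
curl -XPUT 'localhost:9200/logstash-2015.01.22/_settings' \
  -d '{"index": {"number_of_replicas": 0}}'

# once the cluster settles, rebuild the replicas from the primaries
curl -XPUT 'localhost:9200/logstash-2015.01.22/_settings' \
  -d '{"index": {"number_of_replicas": 1}}'
```

Is that safe to do here, or is there a better way?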

Does anybody have any ideas how to fix this problem?

Cheers!
-Robin-

To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/17372160-38fe-4d82-a413-129ac24b5b2e%40googlegroups.com.