Corrupt data: overrun in decompress - ES node problems?


(kaush777) #1

Hi folks,

I had to reboot the box where I run ES and Graylog. I did not stop ES and
Graylog before doing so. After the reboot, the ES logs contained the
following message:

[2013-10-03 22:47:26,921][INFO ][index.gateway.local ] [Maestro]
[loggy_252][1] ignoring recovery of a corrupt translog entry
org.elasticsearch.index.mapper.MapperParsingException: Failed to parse
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:509)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:430)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:318)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:592)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:213)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
Caused by: org.elasticsearch.common.compress.lzf.LZFException: Corrupt
data: overrun in decompress, input offset 464, output offset 619
    at org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.decodeChunk(UnsafeChunkDecoder.java:120)
    at org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.decodeChunk(UnsafeChunkDecoder.java:64)
    at org.elasticsearch.common.compress.lzf.LZFCompressedStreamInput.uncompress(LZFCompressedStreamInput.java:57)
    at org.elasticsearch.common.compress.CompressedStreamInput.readyBuffer(CompressedStreamInput.java:168)
    at org.elasticsearch.common.compress.CompressedStreamInput.read(CompressedStreamInput.java:81)
    at org.elasticsearch.common.xcontent.XContentFactory.xContentType(XContentFactory.java:167)
    at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:64)
    at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:444)
    ... 8 more

Also, the JVM wrote a fatal error log into the ES bin folder:

# A fatal error has been detected by the Java Runtime Environment:
...
# Problematic frame:
# J  org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.copyUpTo32([BI[BII)V
...
--------------- T H R E A D ---------------

Current thread (0x00002aaab871b000): JavaThread
"elasticsearch[Maestro][generic][T#5]" daemon [_thread_in_Java, id=4580,
stack(0x0000000041d9d000,0x0000000041dde000)]
...
Register to memory mapping:

RAX=0x0000000000000023
0x0000000000000023 is pointing to unknown location.
...
and so on.

Now, when I start ES and Graylog, I get the dreaded "It seems like you have
no active Graylog2 node running" message, and there is nothing in the logs
of either one. I tried deleting the most recent index, the deflector, and
even the old index, but to no avail. I also created a new ES cluster and
still got the same problem.

Thanks in advance!

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Alexander Reelsen) #2

Hey,

first, the corruption looks strange. Can you put the fatal error log you
partly pasted above somewhere? I would like to see it, if possible.

Also, can you explain what you mean by "I made a new ES cluster and still
got the same problem"? What exactly did you create?

The problem, from what I can see so far, is that the translog is corrupted.
The translog is a kind of journal which ensures that freshly indexed data
is not only written to an in-memory buffer but also persisted to disk. If
the node shuts down unexpectedly, the translog can be replayed to restore
that data. Your exception suggests the translog has been corrupted somehow,
so the shard cannot start up properly again. You could remove the
corresponding translog from your data directory (each shard has its own
translog directory) and check whether the cluster starts up correctly again.
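A rough sketch of what that looks like on disk (the paths here are assumptions for an ES 0.90-era local-gateway layout; the `es-data` root, node ordinal, index name `loggy_252`, and shard number are placeholders you must adjust to your own setup, and you should stop Elasticsearch and keep a backup before touching anything):

```shell
# Simulated layout of one shard's data directory (all paths are
# placeholders; the real root is your configured path.data):
DATA=./es-data/nodes/0
mkdir -p "$DATA/indices/loggy_252/1/translog"
touch "$DATA/indices/loggy_252/1/translog/translog-1"

# With Elasticsearch stopped, move the suspect translog aside rather than
# deleting it outright; the shard then recovers from its last Lucene
# commit instead of replaying the corrupt journal:
mv "$DATA/indices/loggy_252/1/translog" ./loggy_252-1-translog.bak
```

After restarting the node, watch the logs for that shard's recovery; if it comes up green, the corrupt translog was the culprit (any operations that only lived in that translog are lost).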

Also, can you check your logfiles for anything strange that happened before
you shut down the Elasticsearch cluster? This behaviour is quite unusual
and I would like to be sure the system was operating normally beforehand.

--Alex

On Fri, Oct 4, 2013 at 11:46 AM, Kaushal kaush.mu@gmail.com wrote:



(system) #3