Hi folks,
I had to reboot the box where I run ES and Graylog, without stopping either of
them first. After the reboot, the ES log contained the following message:
[2013-10-03 22:47:26,921][INFO ][index.gateway.local ] [Maestro] [loggy_252][1] ignoring recovery of a corrupt translog entry
org.elasticsearch.index.mapper.MapperParsingException: Failed to parse
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:509)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:430)
    at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:318)
    at org.elasticsearch.index.shard.service.InternalIndexShard.performRecoveryOperation(InternalIndexShard.java:592)
    at org.elasticsearch.index.gateway.local.LocalIndexShardGateway.recover(LocalIndexShardGateway.java:213)
    at org.elasticsearch.index.gateway.IndexShardGatewayService$1.run(IndexShardGatewayService.java:177)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:636)
Caused by: org.elasticsearch.common.compress.lzf.LZFException: Corrupt data: overrun in decompress, input offset 464, output offset 619
    at org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.decodeChunk(UnsafeChunkDecoder.java:120)
    at org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.decodeChunk(UnsafeChunkDecoder.java:64)
    at org.elasticsearch.common.compress.lzf.LZFCompressedStreamInput.uncompress(LZFCompressedStreamInput.java:57)
    at org.elasticsearch.common.compress.CompressedStreamInput.readyBuffer(CompressedStreamInput.java:168)
    at org.elasticsearch.common.compress.CompressedStreamInput.read(CompressedStreamInput.java:81)
    at org.elasticsearch.common.xcontent.XContentFactory.xContentType(XContentFactory.java:167)
    at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:64)
    at org.elasticsearch.common.xcontent.XContentHelper.createParser(XContentHelper.java:46)
    at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:444)
    ... 8 more
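In case it helps anyone diagnose the same thing: the "corrupt translog entry" refers to the shard's on-disk transaction log, which on this (0.90-era) ES should live under the data directory. A sketch of where to look, assuming the default data path and cluster name (both come from elasticsearch.yml, so adjust to your setup; the index and shard are taken from the log line above):

```shell
#!/bin/sh
# Assumptions: default data path and cluster name; index/shard from the
# log line above ([loggy_252][1]). Adjust to match your elasticsearch.yml.
ES_DATA="/var/lib/elasticsearch"
CLUSTER="elasticsearch"
INDEX="loggy_252"
SHARD="1"
TRANSLOG_DIR="$ES_DATA/$CLUSTER/nodes/0/indices/$INDEX/$SHARD/translog"
# Stop ES before touching anything here. Moving the translog aside makes
# the shard recover from its last Lucene commit; any operations that were
# only in the translog are lost.
echo "$TRANSLOG_DIR"
```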
Also, the JVM wrote a fatal error log into the bin folder of ES:
# A fatal error has been detected by the Java Runtime Environment:
...
# Problematic frame:
# J  org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.copyUpTo32([BI[BII)V
...
--------------- T H R E A D ---------------

Current thread (0x00002aaab871b000): JavaThread
"elasticsearch[Maestro][generic][T#5]" daemon [_thread_in_Java, id=4580,
stack(0x0000000041d9d000,0x0000000041dde000)]
...
Register to memory mapping:

RAX=0x0000000000000023
0x0000000000000023 is pointing to unknown location
...
and so on.
Now, when I start ES and Graylog, I get the dreaded "It seems like you have
no active Graylog2 node running" message, and there is nothing in the logs of
either. I tried deleting the most recent index, the deflector, and even the
old indices, but to no avail. I also set up a fresh ES cluster and still got
the same problem.
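For completeness, the checks and deletions I mean look roughly like this. It is a dry-run sketch that only prints the curl calls; the host/port and index name are assumptions (index name taken from the log above), so adjust before actually running anything:

```shell
#!/bin/sh
# Dry-run sketch: builds and prints the curl calls instead of executing them.
# Assumptions: ES HTTP API on localhost:9200; index name from the log above.
ES_URL="http://localhost:9200"
INDEX="loggy_252"
# Check cluster health first -- Graylog2 reports "no active node" when it
# cannot talk to a working ES cluster, so a red or unreachable cluster
# would explain the symptom.
HEALTH_CMD="curl -s $ES_URL/_cluster/health?pretty"
DELETE_CMD="curl -XDELETE $ES_URL/$INDEX/"
echo "$HEALTH_CMD"
echo "$DELETE_CMD"
```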
Thanks in advance!
--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.