On Thu, Jul 5, 2012 at 10:28 PM, Shantanu wrote:
Elasticsearch 0.19.8 causes the JVM to crash on Solaris with Java 6u31. I am
using mapper-attachments 18.7, and I did not have this problem with the
previous versions I was using: Elasticsearch 0.18.7 with mapper-attachments 18.6.
Here is the Elasticsearch log:
The JVM crashes after this point. Here is an excerpt from the crash report;
I am also attaching the complete crash report.
V [libjvm.so+0x974c88] Unsafe_GetLong+0x120
j sun.misc.Unsafe.getLong(Ljava/lang/Object;J)J+-1078249144
j sun.misc.Unsafe.getLong(Ljava/lang/Object;J)J+0
j org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.copyUpTo32([BI[BII)V+78
j org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.decodeChunk([BI[BII)V+26
j org.elasticsearch.common.compress.lzf.impl.UnsafeChunkDecoder.decodeChunk(Ljava/io/InputStream;[B[B)I+104
j org.elasticsearch.common.compress.lzf.LZFCompressedStreamInput.uncompress(Ljava/io/InputStream;[B)I+10
j org.elasticsearch.common.compress.CompressedStreamInput.readyBuffer()Z+32
j org.elasticsearch.common.compress.CompressedStreamInput.read()I+1
j org.elasticsearch.common.xcontent.XContentFactory.xContentType(Ljava/io/InputStream;)Lorg/elasticsearch/common/xcontent/XContentType;+1
j org.elasticsearch.common.xcontent.XContentHelper.createParser([BII)Lorg/elasticsearch/common/xcontent/XContentParser;+32
j org.elasticsearch.gateway.local.state.meta.LocalGatewayMetaState.pre019Upgrade()V+238
On Thursday, July 5, 2012 6:18:23 PM UTC-4, kimchy wrote:
I wonder if it's a problem with LZF and its use of Unsafe on Solaris (it
smells like it). In 0.19.8 there isn't an option to configure LZF not to use
Unsafe, but I just added support for it in the 0.19 branch. The setting is
compress.lzf.decoder; set it to "safe". If you want, I can provide a link to
a snapshot build of 0.19.9, just tell me which one you need (zip/tar.gz/deb).
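For anyone wanting to try this, a minimal sketch of applying the workaround:
for a standalone server it is just the line "compress.lzf.decoder: safe" in
elasticsearch.yml (or, if memory serves for the 0.19-era startup scripts, the
equivalent -Des.compress.lzf.decoder=safe system property). The Java snippet
below assumes the 0.19-era embedded-node API (ImmutableSettings/NodeBuilder)
and is only an illustration, not code taken from this thread:

import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class SafeLzfNode {
    public static void main(String[] args) {
        // Force the pure-Java LZF decoder instead of the sun.misc.Unsafe-based one.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("compress.lzf.decoder", "safe")
                .build();
        // Build and start an embedded node with these settings.
        Node node = NodeBuilder.nodeBuilder().settings(settings).node();
        try {
            // ... use node.client() as usual ...
        } finally {
            node.close();
        }
    }
}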
On Friday, July 6, 2012 4:06:47 AM UTC-4, kimchy wrote:
One more thing: can you zip the [data location]/nodes/0/_state directory and
mail it to me? I'd like to see if I can recreate the failure-to-decompress
problem.
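In case the Solaris box has no zip utility handy, here is a small plain-Java
(Java 6 compatible) sketch that archives a directory recursively; this is
only an illustration, and the paths are placeholders to be pointed at the
_state directory mentioned above:

import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipStateDir {
    // Usage: java ZipStateDir <data location>/nodes/0/_state state.zip
    public static void main(String[] args) throws IOException {
        File stateDir = new File(args[0]);
        ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(args[1]));
        try {
            addDir(stateDir, stateDir, zip);
        } finally {
            zip.close();
        }
    }

    private static void addDir(File root, File dir, ZipOutputStream zip) throws IOException {
        File[] children = dir.listFiles();
        if (children == null) {
            return;
        }
        for (File child : children) {
            if (child.isDirectory()) {
                addDir(root, child, zip);
            } else {
                // Store entries with paths relative to the _state directory.
                String name = child.getAbsolutePath().substring(root.getAbsolutePath().length() + 1);
                zip.putNextEntry(new ZipEntry(name));
                FileInputStream in = new FileInputStream(child);
                try {
                    byte[] buf = new byte[8192];
                    int n;
                    while ((n = in.read(buf)) != -1) {
                        zip.write(buf, 0, n);
                    }
                } finally {
                    in.close();
                }
                zip.closeEntry();
            }
        }
    }
}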
On Friday, July 6, 2012 3:28:58 PM UTC-4, Shantanu wrote:
I have emailed you the state files. I will try running the 0.19.9 snapshot
with the property set to "safe" and will let you know how it goes.
Thank you for all the help.
On Friday, July 6, 2012 9:43:30 PM UTC+2, Shantanu wrote:
I tried running the snapshot version of 0.19.9 on the Solaris machine. The
node did come up and the JVM did not crash, but I am still getting the
following exception. It may be because Snappy does not have binaries for
Solaris? Is this a fatal error, or will Elasticsearch continue to function
normally in spite of it? I have not set any properties related to
compression (other than setting "compress.lzf.decoder" to "safe"), so I
suppose compression is turned off by default?
Here is the log:
[2012-07-06 15:30:12,075][INFO ][node ] [Purple Girl] {0.19.9-SNAPSHOT}[4865]: initializing ...
[2012-07-06 15:30:12,107][INFO ][plugins ] [Purple Girl] loaded , sites
java.lang.reflect.InvocationTargetException
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.xerial.snappy.SnappyLoader.loadNativeLibrary(SnappyLoader.java:317)
        at org.xerial.snappy.SnappyLoader.load(SnappyLoader.java:219)
        at org.xerial.snappy.Snappy.<clinit>(Snappy.java:44)
        at org.elasticsearch.common.compress.snappy.xerial.XerialSnappy.<clinit>(XerialSnappy.java:35)
        at org.elasticsearch.common.compress.CompressorFactory.<clinit>(CompressorFactory.java:54)
        at org.elasticsearch.node.internal.InternalNode.<init>(InternalNode.java:121)
        at org.elasticsearch.node.NodeBuilder.build(NodeBuilder.java:159)
        at org.elasticsearch.bootstrap.Bootstrap.setup(Bootstrap.java:67)
        at org.elasticsearch.bootstrap.Bootstrap.main(Bootstrap.java:200)
        at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:32)
Caused by: java.lang.UnsatisfiedLinkError: no snappyjava in java.library.path
        at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1738)
        at java.lang.Runtime.loadLibrary0(Runtime.java:823)
        at java.lang.System.loadLibrary(System.java:1028)
        at org.xerial.snappy.SnappyNativeLoader.loadLibrary(SnappyNativeLoader.java:52)
        ... 14 more
[2012-07-06 15:30:20,375][INFO ][node ] [Purple Girl] {0.19.9-SNAPSHOT}[4865]: initialized
[2012-07-06 15:30:20,376][INFO ][node ] [Purple Girl] {0.19.9-SNAPSHOT}[4865]: starting ...
[2012-07-06 15:30:20,878][INFO ][transport ] [Purple Girl] bound_address {inet[/0.0.0.0:9303]}, publish_address {inet[/192.168.166.221:9303]}
[2012-07-06 15:30:24,837][INFO ][cluster.service ] [Purple Girl] detected_master [Apalla][QtMIm7p6TamFnm2bS_ZDMg][inet[/192.168.166.221:9301]], added {[Fagin][BcdSVawLTzuMRfms9einmg][inet[/192.168.166.221:9300]],[Whizzer][BX1sImVaSUSUMtGzNMUYLA][inet[/192.168.166.221:9302]],[Apalla][QtMIm7p6TamFnm2bS_ZDMg][inet[/192.168.166.221:9301]],}, reason: zen-disco-receive(from master [[Apalla][QtMIm7p6TamFnm2bS_ZDMg][inet[/192.168.166.221:9301]]])
[2012-07-06 15:30:25,470][INFO ][discovery ] [Purple Girl] elasticsearch/2Aml04qZR8OoFP4p8FBKiA
[2012-07-06 15:30:25,561][INFO ][http ] [Purple Girl] bound_address {inet[/0.0.0.0:9203]}, publish_address {inet[/192.168.166.221:9203]}
[2012-07-06 15:30:25,562][INFO ][node ] [Purple Girl] {0.19.9-SNAPSHOT}[4865]: started
The UnsatisfiedLinkError is not caught, but it should be.
Jörg
On Monday, July 9, 2012 2:12:21 PM UTC-4, kimchy wrote:
Did you set the mentioned flag? It seems like it loaded fine. I will check
the state files; the failure you see is from Snappy failing to load (which
is fine, it's not required). I will try to hack around it.
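In other words, the fix amounts to treating Snappy as optional: probe the
native library once and swallow the UnsatisfiedLinkError instead of letting
it propagate. A rough sketch of that pattern follows (an illustration only,
not Elasticsearch's actual code; it assumes snappy-java's
Snappy.compress(byte[]) is on the classpath):

public final class OptionalSnappy {

    // Probe once at class-initialization time.
    private static final boolean AVAILABLE = detect();

    private static boolean detect() {
        try {
            // Any call that touches the native library will do; compress() forces it to load.
            org.xerial.snappy.Snappy.compress(new byte[]{0});
            return true;
        } catch (Throwable t) {
            // UnsatisfiedLinkError is an Error, not an Exception, so catch Throwable
            // and fall back to LZF / no compression instead of failing at startup.
            return false;
        }
    }

    public static boolean available() {
        return AVAILABLE;
    }
}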
As I mentioned, the node did come up after I set the aforementioned flag. I
was just curious about the Snappy exception I saw in the logs when the node
was starting up. Now that you have confirmed that it's not fatal, we will go
ahead and upgrade to 0.19.9 once the stable version comes out. Thanks a lot
for the help.
I managed to hack around it so the exception won't be shown when Snappy is
not available, so at least this will go away when the final 0.19.9 release
is out (it's already in the 0.19 branch).
Just to let you know that I ran into the exact same issue with version
0.20.6 of ES running on an IBM System i (OS400 operating system) with a
32-bit IBM J9 JVM. Adding the setting compress.lzf.decoder: safe fixes the
problem for me as well. Thank you very much.