Hello. I just upgraded from ES 0.16.2 to 0.17.4.
After the upgrade, my tests always deadlock.
Below is the log from jstack:
Found one Java-level deadlock:
"elasticsearch[index]-pool-182-thread-1":
  waiting to lock monitor 0x00007ff3cc007150 (object 0x00007ff44e3755b8, a java.lang.Object),
  which is held by "elasticsearch[Scarlet Beetle]clusterService#updateTask-pool-191-thread-1"
"elasticsearch[Scarlet Beetle]clusterService#updateTask-pool-191-thread-1":
  waiting to lock monitor 0x00007ff3cc0070a8 (object 0x00007ff450110fd0, a java.lang.Object),
  which is held by "elasticsearch[index]-pool-182-thread-1"

Java stack information for the threads listed above:
"elasticsearch[index]-pool-182-thread-1":
        at org.elasticsearch.index.mapper.MapperService$InternalFieldMapperListener.fieldMapper(MapperService.java:710)
        - waiting to lock <0x00007ff44e3755b8> (a java.lang.Object)
        at org.elasticsearch.index.mapper.DocumentMapper.addFieldMapper(DocumentMapper.java:634)
        - locked <0x00007ff450110fd0> (a java.lang.Object)
        at org.elasticsearch.index.mapper.object.ObjectMapper$3.fieldMapper(ObjectMapper.java:695)
        at org.elasticsearch.index.mapper.core.AbstractFieldMapper.traverse(AbstractFieldMapper.java:323)
        at org.elasticsearch.index.mapper.object.ObjectMapper.serializeValue(ObjectMapper.java:693)
        - locked <0x00007ff450110ae0> (a java.lang.Object)
        at org.elasticsearch.index.mapper.object.ObjectMapper.parse(ObjectMapper.java:440)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:566)
        at org.elasticsearch.index.mapper.DocumentMapper.parse(DocumentMapper.java:490)
        at org.elasticsearch.index.shard.service.InternalIndexShard.prepareIndex(InternalIndexShard.java:289)
        at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:185)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:428)
        at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:341)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
"elasticsearch[Scarlet Beetle]clusterService#updateTask-pool-191-thread-1":
        at org.elasticsearch.index.mapper.DocumentMapper.addObjectMapperListener(DocumentMapper.java:670)
        - waiting to lock <0x00007ff450110fd0> (a java.lang.Object)
        at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:187)
        - locked <0x00007ff44e3755b8> (a java.lang.Object)
        at org.elasticsearch.index.mapper.MapperService.add(MapperService.java:166)
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.processMapping(IndicesClusterStateService.java:382)
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyMappings(IndicesClusterStateService.java:349)
        at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:176)
        - locked <0x00007ff44dda9ce0> (a java.lang.Object)
        at org.elasticsearch.cluster.service.InternalClusterService$2.run(InternalClusterService.java:254)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
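The two stacks above show a classic lock-order inversion: the indexing thread holds the DocumentMapper monitor (0x...110fd0) and waits for the MapperService one (0x...3755b8), while the cluster-update thread holds the MapperService monitor and waits for the DocumentMapper one. A minimal, self-contained sketch of the same pattern (lock and thread names here are illustrative stand-ins, not the actual ES fields) that provokes the cycle and detects it via ThreadMXBean:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadMXBean;

public class DeadlockDemo {
    // Illustrative stand-ins for the two monitors in the jstack dump
    static final Object mapperServiceMonitor = new Object();   // plays 0x...3755b8
    static final Object documentMapperMonitor = new Object();  // plays 0x...110fd0

    static boolean provokeAndDetect() throws InterruptedException {
        Thread indexThread = new Thread(() -> {
            synchronized (documentMapperMonitor) {        // like DocumentMapper.addFieldMapper
                sleepQuietly(100);
                synchronized (mapperServiceMonitor) { }   // like InternalFieldMapperListener.fieldMapper
            }
        }, "index-thread");
        Thread clusterThread = new Thread(() -> {
            synchronized (mapperServiceMonitor) {         // like MapperService.add
                sleepQuietly(100);
                synchronized (documentMapperMonitor) { }  // like DocumentMapper.addObjectMapperListener
            }
        }, "clusterService#updateTask");
        // Daemon threads so the stuck pair does not keep the JVM alive afterwards
        indexThread.setDaemon(true);
        clusterThread.setDaemon(true);
        indexThread.start();
        clusterThread.start();
        Thread.sleep(500);  // let both threads grab their first monitor and block on the second
        ThreadMXBean bean = ManagementFactory.getThreadMXBean();
        long[] deadlocked = bean.findDeadlockedThreads();  // null when no deadlock exists
        return deadlocked != null && deadlocked.length == 2;
    }

    static void sleepQuietly(long ms) {
        try { Thread.sleep(ms); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(provokeAndDetect()
                ? "deadlock detected, same shape as the jstack report"
                : "no deadlock this run");
    }
}
```

With the two sleeps, each thread reliably holds its first monitor before requesting the second, so findDeadlockedThreads() reports the pair; the general fix for this shape of bug is to always acquire the two monitors in the same order.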
Can you open an issue for this while I work on a fix? It's been there in previous versions; it's just a matter of timing and order. The chances of it happening are small, but it needs to be fixed.