Nodes spend all time in RamUsageEstimator after upgrade to 0.90.11

Hi all,

After an upgrade from 0.90.3 to 0.90.11, I see my nodes spending a lot of
time in RamUsageEstimator, and I worry about the stability of my cluster
...
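The trace below comes from the hot threads API; I captured it with something
along these lines (exact parameters from memory):

curl 'http://localhost:9200/_nodes/hot_threads'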

Beginning of the hot_threads response:

101.9% (509.3ms out of 500ms) cpu usage by thread
'elasticsearch[sissor1][management][T#1]'
10/10 snapshots sharing following 19 elements

org.apache.lucene.util.RamUsageEstimator$IdentityHashSet.expandAndRehash(RamUsageEstimator.java:747)

org.apache.lucene.util.RamUsageEstimator$IdentityHashSet.add(RamUsageEstimator.java:678)

org.apache.lucene.util.RamUsageEstimator.measureObjectSize(RamUsageEstimator.java:437)

org.apache.lucene.util.RamUsageEstimator.sizeOf(RamUsageEstimator.java:350)

org.apache.lucene.codecs.lucene3x.Lucene3xFields.ramBytesUsed(Lucene3xFields.java:1080)

org.apache.lucene.index.SegmentCoreReaders.ramBytesUsed(SegmentCoreReaders.java:195)

org.apache.lucene.index.SegmentReader.ramBytesUsed(SegmentReader.java:558)

org.elasticsearch.index.engine.robin.RobinEngine.getReaderRamBytesUsed(RobinEngine.java:1180)
org.elasticsearch.index.engine.robin.RobinEngine.segmentsStats(RobinEngine.java:1192)

org.elasticsearch.index.shard.service.InternalIndexShard.segmentStats(InternalIndexShard.java:514)

org.elasticsearch.action.admin.indices.stats.CommonStats.&lt;init&gt;(CommonStats.java:154)

org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:211)

org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)

org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:724)

101.7% (508.3ms out of 500ms) cpu usage by thread
'elasticsearch[sissor1][management][T#2]'
10/10 snapshots sharing following 18 elements
java.lang.reflect.Array.get(Native Method)

org.apache.lucene.util.RamUsageEstimator.measureObjectSize(RamUsageEstimator.java:456)

org.apache.lucene.util.RamUsageEstimator.sizeOf(RamUsageEstimator.java:350)

org.apache.lucene.codecs.lucene3x.Lucene3xFields.ramBytesUsed(Lucene3xFields.java:1080)

org.apache.lucene.index.SegmentCoreReaders.ramBytesUsed(SegmentCoreReaders.java:195)

org.apache.lucene.index.SegmentReader.ramBytesUsed(SegmentReader.java:558)

org.elasticsearch.index.engine.robin.RobinEngine.getReaderRamBytesUsed(RobinEngine.java:1180)

org.elasticsearch.index.engine.robin.RobinEngine.segmentsStats(RobinEngine.java:1192)

org.elasticsearch.index.shard.service.InternalIndexShard.segmentStats(InternalIndexShard.java:514)

org.elasticsearch.action.admin.indices.stats.CommonStats.&lt;init&gt;(CommonStats.java:154)

org.elasticsearch.indices.InternalIndicesService.stats(InternalIndicesService.java:211)

org.elasticsearch.node.service.NodeService.stats(NodeService.java:165)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:100)

org.elasticsearch.action.admin.cluster.node.stats.TransportNodesStatsAction.nodeOperation(TransportNodesStatsAction.java:43)

org.elasticsearch.action.support.nodes.TransportNodesOperationAction$AsyncAction$2.run(TransportNodesOperationAction.java:146)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
java.lang.Thread.run(Thread.java:724)

I've seen this message:
https://groups.google.com/forum/#!searchin/elasticsearch/RamUsageEstimator/elasticsearch/7mrDhqe6LEo/Vgv4UtvvvEIJ
but there is no conclusion there. Is there anything I can do?

Regards.

Benoît


This is a known issue and will be fixed shortly. For now, what you can do
is run _optimize on all your indexes and set max_num_segments to 1, like
below. Note that this may take a while depending on the size of your
indexes.

http://localhost:9200/_optimize?max_num_segments=1
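For example, with curl (assuming the node listens on localhost:9200; the index
name in the second form is just a placeholder):

curl -XPOST 'http://localhost:9200/_optimize?max_num_segments=1'

curl -XPOST 'http://localhost:9200/my_index/_optimize?max_num_segments=1'

The first form merges every index down to a single segment; the second does one
index at a time, which lets you spread the work out.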


I forgot to say that one consequence is that the 'head' plugin interface
remains empty.

The following requests time out:

  • _status
  • stats?all=true
  • _nodes

How can I get some information about the cluster under these conditions?

Benoît


Thank you Binh Ly,

On Tuesday, February 25, 2014 4:25:59 PM UTC+1, Binh Ly wrote:

This is a known issue and will be fixed shortly. For now, what you can do
is run _optimize on all your indexes and set max_num_segments to 1, like
below. Note that this may take a while depending on the size of your
indexes.

http://localhost:9200/_optimize?max_num_segments=1

Your suggestion confirms what Jörg Prante said here:
https://groups.google.com/d/msg/elasticsearch/7mrDhqe6LEo/3gjOJka85OYJ
This is a problem with Lucene 3.x segments.
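Before optimizing anything, I will probably check which indexes still contain
old 3.x segments with the segments API, something like this (the index name is
just a placeholder, and I am assuming the per-segment Lucene version is
included in the response):

curl 'http://localhost:9200/my_index/_segments?pretty'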

I have around 1 TB of indexes, so I'm not keen to run optimize on everything;
I will try it on one of the smallest indexes first.

If I stop all the requests to the statistics APIs, should I see the load
decrease?

Regards.

Benoît


The release of the latest version, 0.90.12, solves (or at least hides) this problem.

Thanks to the team !

Benoît

