High CPU on random idle node

Hi,

We're deploying ES for Logstash, and I recently set up the cluster to migrate
older indices to "slower" nodes for longer-term storage. The nodes are not
being queried and are not doing any indexing, but CPU is constantly at 75%.
Here is the output from hot_threads, jstack, and some settings:

$ java -version
java version "1.7.0_65"
OpenJDK Runtime Environment (IcedTea 2.5.1) (7u65-2.5.1-4ubuntu1~0.12.04.2)
OpenJDK 64-Bit Server VM (build 24.65-b04, mixed mode)

hot threads: https://gist.github.com/jlintz/3d965940284f1a7acf1e
settings: https://gist.github.com/jlintz/c701496a3db26ff0a20e
jstack: https://gist.github.com/jlintz/35924149197850e52931
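
For reference, this is roughly how I'm collecting the output above (assuming the node's HTTP port is the default 9200, and <es-pid> is the Elasticsearch process id; adjust for your setup):

# three hottest threads as reported by Elasticsearch itself
$ curl -s 'localhost:9200/_nodes/hot_threads?threads=3'
# full node-level stats (JVM, thread pools, indices, etc.)
$ curl -s 'localhost:9200/_nodes/stats?pretty'
# raw JVM thread dump of the Elasticsearch process
$ jstack <es-pid> > jstack.out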

I just realized while posting this that my index buffers are set too high for
these nodes, since they aren't doing any indexing. But in case that's not
the issue, I'll post anyway. I'll report back if the issue is still present.
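
For what it's worth, the change I have in mind is something like this in elasticsearch.yml on the cold nodes; the 4% value is only a guess on my part (the default is 10% of heap):

# elasticsearch.yml on the non-indexing "slow" nodes:
# shrink the shared indexing buffer, since these nodes only hold old, read-only indices
indices.memory.index_buffer_size: 4%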


It looks like you have a monitoring tool running, and it got stuck in the node
stats call while traversing a large number of shards/segments.

How many shards/segments are in your migration? It seems to be very active.
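
Something like this should give a rough count (assuming the default HTTP port and a 1.x node with the _cat API; the grep just pulls the per-shard segment counts out of the segments response):

# one line per shard copy in the cluster
$ curl -s 'localhost:9200/_cat/shards' | wc -l
# per-shard segment counts ("num_search_segments" : N for each shard)
$ curl -s 'localhost:9200/_segments?pretty' | grep num_search_segments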

Maybe the bloom filter format conversion is expensive, but I am not sure.

Jörg


Thanks, it ended up being the Ganglia plugin that was causing the excessive CPU
consumption. I've disabled it, since we'll end up buying Marvel once we've
fully deployed.
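
In case anyone else runs into this, disabling it on a 1.x node looks roughly like the following (the plugin name is whatever --list reports; "ganglia" below is just a placeholder for ours):

# list installed plugins
$ bin/plugin --list
# remove the offending plugin, then restart the node
$ bin/plugin --remove ganglia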
