Following are the jstacks:
ES Node1 - CPU Usage 100-200%: https://gist.github.com/3216175
ES Node2 (offending node) - CPU Usage
ES Node3 - CPU Usage 100-200%: https://gist.github.com/3216200
ES Version: 0.19.8
OS: Ubuntu 10.04.4 LTS
JVM Versions: 20.0-b12 and 19.0-b09
All nodes are physical machines with 24 GB RAM, 8-core CPU.
Another observation: Bigdesk shows that there are no GET requests coming
to node1, which is kind of weird since HAProxy balances all requests
across the nodes.
The problem is not just node2's CPU usage but also its heap memory usage.
Because heap usage grows so fast, GCs run so often that node2
heavily skews our search performance.
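To put a number on "GCs run so often", a minimal sketch of estimating GC overhead from two JVM stats samples (the millisecond figures below are made up for illustration; real values come from the `jvm.gc` section of `curl "localhost:9200/_nodes/stats?pretty=true"` taken twice, some interval apart):

```python
# Minimal sketch: fraction of wall-clock time spent in GC between two
# samples of a node's cumulative GC time (hypothetical numbers).
def gc_overhead(gc_ms_before, gc_ms_after, wall_ms):
    """GC time accrued in the interval, as a fraction of the interval."""
    return (gc_ms_after - gc_ms_before) / wall_ms

# Hypothetical samples taken 60 s apart on node2:
overhead = gc_overhead(gc_ms_before=120_000, gc_ms_after=132_000, wall_ms=60_000)
print(f"GC overhead: {overhead:.0%}")  # prints "GC overhead: 20%"
```

A node spending a double-digit percentage of wall time in GC would indeed skew search latency the way described.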
Following are the heap graphs:
On Tuesday, July 31, 2012 7:44:37 AM UTC+2, kimchy wrote:
Can you jstack another node? Let's see if it's doing any work as well. Which
ES version are you using? Also, JVM version, OS version, and are you
running in a virtual environment or not?
On Jul 31, 2012, at 1:47 AM, Nitish Sharma email@example.com wrote:
We are using the Tire Ruby client. The ES cluster is behind HAProxy. Thus, all
search, get, and update requests are (almost) equally distributed across
the nodes.
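For reference, a balancing setup like the one described would look roughly like this hypothetical haproxy.cfg fragment (server names and ports are assumptions, not taken from the thread):

```
# Hypothetical HAProxy fragment: round-robin HTTP balancing across ES nodes
listen elasticsearch
    bind *:9200
    mode http
    balance roundrobin
    option httpchk GET /
    server es1 node1:9200 check
    server es2 node2:9200 check
    server es3 node3:9200 check
```

With `balance roundrobin` and all servers passing health checks, each node should see a near-equal share of requests, which is why Bigdesk showing no GETs on node1 is surprising.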
On Monday, July 30, 2012 4:39:29 PM UTC+2, Stéphane R. wrote:
What kind of clients are you using? Do they balance their queries
between the five nodes, or do they always query the same node? If they
always hit the same node, it may explain this kind of behavior.
2012/7/30 Nitish Sharma firstname.lastname@example.org:
I checked the stats, and elasticsearch-head also confirmed that each
node has an equal number of shards. Moreover, interestingly, this weekend this
behavior (of constant high CPU usage) was taken over by another node, and the
node previously over-using CPU is now more or less normal. So, as far as I
observed it, at any given point in time, at least one node would be doing a
lot of pure-CPU work, while the other nodes are fairly quiet. Weird!
We are not indexing documents with routing, nor updating them using routing.
Any other pointers?
On Saturday, July 28, 2012 12:47:34 AM UTC+2, Igor Motov wrote:
Interesting. Did you try running curl
"localhost:9200/_nodes/stats?pretty=true" to make sure that a uniform
distribution of indexing operations is really the case?
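The per-node comparison that command enables can be sketched as follows. The JSON here is a made-up excerpt with hypothetical node names and counts; a real response from `_nodes/stats` nests per-node stats under `"nodes"`:

```python
# Minimal sketch: compare per-node indexing counts from a nodes-stats
# response. The sample JSON is fabricated for illustration only.
import json

sample = json.loads("""
{"nodes": {
  "abc": {"name": "node1", "indices": {"indexing": {"index_total": 10200}}},
  "def": {"name": "node2", "indices": {"indexing": {"index_total": 51400}}},
  "ghi": {"name": "node3", "indices": {"indexing": {"index_total": 9800}}}
}}
""")

counts = {n["name"]: n["indices"]["indexing"]["index_total"]
          for n in sample["nodes"].values()}
for name, total in sorted(counts.items()):
    print(name, total)
```

A node whose `index_total` is several times its peers' (node2 in this fabricated sample) would explain the skewed CPU usage; roughly equal counts would point elsewhere.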
On Friday, July 27, 2012 6:15:29 PM UTC-4, Nitish Sharma wrote:
We are, indeed, running a lot of "update" operations continuously, but
they are not routed to specific shards. The document to be updated can be
present on any of the shards (on any of the nodes). And, as I mentioned,
shards are uniformly distributed across nodes.
On Friday, July 27, 2012 10:12:56 PM UTC+2, Igor Motov wrote:
It looks like this node is quite busy updating documents. Is it possible
that your indexing load is concentrated on the shards that just happen to
be located on this particular node?
On Friday, July 27, 2012 3:58:46 PM UTC-4, Nitish Sharma wrote:
I couldn't make any sense out of the jstack dump (2000 lines).
Maybe you can help - http://pastebin.com/u57QB7ra?
On Friday, July 27, 2012 6:04:18 PM UTC+2, Igor Motov wrote:
Run jstack on the node that is using 600-700% of CPU and let's see
what it's doing.
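A jstack dump that long is easier to read after summarizing it. A minimal sketch that counts thread states (the dump text here is a tiny made-up excerpt; point `dump` at the real file's contents):

```python
# Minimal sketch: count java.lang.Thread.State values in a jstack dump
# so a 2000-line dump collapses into a few numbers.
import re
from collections import Counter

dump = """\
"elasticsearch[search]-pool-1" prio=10 tid=0x1 nid=0x2 runnable
   java.lang.Thread.State: RUNNABLE
"elasticsearch[index]-pool-2" prio=10 tid=0x3 nid=0x4 waiting on condition
   java.lang.Thread.State: WAITING (parking)
"elasticsearch[index]-pool-3" prio=10 tid=0x5 nid=0x6 runnable
   java.lang.Thread.State: RUNNABLE
"""

states = Counter(re.findall(r"java\.lang\.Thread\.State: (\w+)", dump))
print(dict(states))  # prints {'RUNNABLE': 2, 'WAITING': 1}
```

Many RUNNABLE threads in the same pool across repeated dumps taken a few seconds apart is usually where the 600-700% CPU is going.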
On Friday, July 27, 2012 9:45:27 AM UTC-4, Nitish Sharma wrote:
We have a 5-node ES cluster. On one particular node, the ES process is
consuming 600-700% CPU (8 cores) all the time, while the other nodes'
CPU usage is always below 100%. We are running 0.19.8 and each node has