Identifying hot shards to address uneven load

I have read many posts in this group about uneven load and hot shards.
We are experiencing the same symptoms: one data node out of 8 has 100%
CPU usage while the other 7 nodes operate at 40%.

My question is: how do I identify the volume of searches per shard?

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Hi Daveo,

Are you using the routing parameter in your searches? If not, it's highly
unlikely that the search load is unevenly spread across the shards. What is
possible is that one node contains more shards than the others.

It feels like the node is doing something else or got stuck on an unlucky
query. To verify, can you please post the result of the hot threads API
(http://www.elasticsearch.org/guide/reference/api/admin-cluster-nodes-hot-threads/)
for this node? That will help figure out what it's doing. Also, can you post
the memory usage on that node?

Cheers,
Boaz
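
For reference, a minimal way to pull that hot threads output for a single node
might look like the following; the node name is a placeholder and the exact URL
form can vary slightly between versions:

# hot threads for one node, showing the 5 busiest threads
curl -s "http://localhost:9200/_nodes/search02/hot_threads?threads=5"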

Here are the results of the hot_threads:
https://gist.github.com/dodizzle/6731643

For memory, the system has 64GB RAM; the JVM settings are:

-Xms30720m -Xmx30720m -Xss256k -XX:+UseParNewGC -XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly
-XX:+HeapDumpOnOutOfMemoryError

We are running 0.90.5.

Thanks for getting back to me.
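
For the memory question, one way to pull per-node JVM heap numbers is the node
stats API; this is a sketch, and the parameter style differs a bit between
0.90.x and later releases:

# JVM heap used / heap max per node
curl -s "http://localhost:9200/_nodes/stats?jvm=true&pretty"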

Hmm, all the threads are busy searching. Are you running any heavy queries?
Are you sure the node gets an equal amount of traffic compared to the rest?

Besides the above, can you also share the output of:

  • curl -s localhost:9200/_stats?all
  • curl -s localhost:9200/_nodes?all

These calls give cluster statistics plus the cluster topology - perhaps we
can see something there.

Cheers,
Boaz
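
One rough way to check whether the hot node really takes more search traffic
than the rest is to diff the cumulative per-node search counters between two
samples. This sketch assumes jq is installed; the exact JSON field paths may
differ slightly by version:

# sample per-node search query totals, wait a minute, sample again
curl -s "localhost:9200/_nodes/stats?indices=true" \
  | jq '.nodes[] | {name: .name, queries: .indices.search.query_total}' > t0.json
sleep 60
curl -s "localhost:9200/_nodes/stats?indices=true" \
  | jq '.nodes[] | {name: .name, queries: .indices.search.query_total}' > t1.json
# the node whose query_total grows fastest is receiving the most search traffic
diff t0.json t1.json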

Results of stats all:
https://gist.github.com/dodizzle/6733355

Results of nodes all:
https://gist.github.com/dodizzle/6733369

I've disabled autobalancing and have manually moved shards to try and even
them out, but there are always one or two nodes whose CPU usage is much
higher than the rest.
BTW, we are using routing with 2 servers (search08/09) running HTTP and
taking all the queries from haproxy.
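
As an aside, manual moves like that can be scripted against the cluster
reroute API; the index, shard number and node names below are placeholders:

curl -s -XPOST "localhost:9200/_cluster/reroute" -d '{
  "commands": [
    { "move": { "index": "my_index", "shard": 3, "from_node": "search02", "to_node": "search05" } }
  ]
}'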

Hi David,

About routing: I meant the routing option of the index and search APIs, as
explained in the Elasticsearch routing documentation.

Good to see you can influence the CPU usage by moving shards around - that
means it is related to the content of the shards and not the machine itself.
That should make it easier to trace. Did you notice any correlation between
the presence of a specific shard and high CPU usage?

Can you also get these two (slightly different):

curl -XGET "http://localhost:9200/_nodes/stats?all"

curl -XGET "http://localhost:9200/_cluster/state"

Cheers,
Boaz
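
For anyone following along, the routing option Boaz means looks roughly like
this (index, type and routing values are made up for illustration): documents
indexed with a routing key land on a single shard, and searches that pass the
same key hit only that shard, which is exactly how one shard can end up hot if
the key distribution is skewed.

curl -XPUT "localhost:9200/my_index/my_type/1?routing=user_42" -d '{"user": "user_42", "body": "hello"}'
curl -XGET "localhost:9200/my_index/_search?routing=user_42" -d '{"query": {"term": {"user": "user_42"}}}'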

Boaz, I just checked on routing and we are not using the routing option.

OK, good to know.

Did you manage to correlate CPU with a specific index? Also, can you post
the output of those two extra APIs? (Please indicate which node had the high
CPU load at the time.)

Cheers,
Boaz
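
One way to see which shards live on the hot node, to help with that
correlation, is to pull them out of the cluster state. This is a sketch: it
assumes jq, the node name is a placeholder, and the routing_nodes layout is
from memory, so the paths may need adjusting:

curl -s "localhost:9200/_cluster/state" > state.json
# look up the id of the hot node by name
jq '.nodes | to_entries[] | select(.value.name == "search02") | .key' state.json
# list the shards currently allocated to that node id
jq --arg node "PUT-NODE-ID-HERE" '.routing_nodes.nodes[$node][] | {index: .index, shard: .shard, primary: .primary}' state.json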

Hi David,

SPM for Elasticsearch should let you see all this stuff visually, as time
series, so you can see changes over time.

Otis

Result:
I manually moved shards around, attempting to get an even mix of primary
and replica shards per box.
Now I have a semi-even load on my data nodes.
The unfortunate part is that it took manual intervention, and I was moving
shards by guessing, because I had no information about the number of queries
per shard.
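
For what it's worth, later Elasticsearch releases expose exactly that counter:
the indices stats API can be broken down to shard level, which would have
removed the guesswork. A sketch (the level parameter is not available on the
0.90.x used in this thread, as far as I know):

# per-shard search stats, including query_total, on newer releases
curl -s "localhost:9200/_stats/search?level=shards&pretty"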

David,

Check the attachment. That's how you can see which shard is on which host
and how big it is.

Otis

Sematext -- Elasticsearch Performance Monitoring & Search Analytics
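
On newer releases the cat shards API gives a similar plain-text view of which
shard sits on which node and how big it is, without needing a screenshot; it
is not available on 0.90.x as far as I recall:

curl -s "localhost:9200/_cat/shards?v"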

Hi,

Ah, I see you were asking about the number of queries on each shard, not
the number of docs. I won't mark up a new screenshot, but if you look at
the one I sent, you'll see a tab labeled "Search" where you can get this info.

Otis
