Some performance checking questions

Hello again.

(Otis, yes, this weekend we are putting in Sematext's monitor to try and
help us out with the following!)

We are trying to debug slow index updates and slow searches.

Any pointers on what we should be looking for would be helpful.

1 - The CPUs are barely working on three of the four nodes we have.
2 - If I had to guess (until we add more monitoring this weekend), our
primary node seems to be taking all of the updates and queries.
Could this be a result of the drivers not using the other nodes, or is
all traffic supposed to go through the primary node? (See the sketch below.)
3 - Is there anything to turn on in Elasticsearch that would help track down
usage patterns (we have multiple tools accessing the nodes) so we can find
the culprit?
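
As a concrete illustration of what I mean in question 2, here is a minimal
sketch (hostnames and index name are hypothetical) of spreading search
requests across all four nodes on the client side instead of pointing every
tool at a single node:

import itertools
import requests

# Hypothetical node addresses; substitute the real hosts.
NODES = itertools.cycle([
    "http://es-node1:9200",
    "http://es-node2:9200",
    "http://es-node3:9200",
    "http://es-node4:9200",
])

def search(index, query_body):
    # Round-robin over the nodes so queries are not all funneled
    # through one box; any node can act as the coordinating node.
    node = next(NODES)
    resp = requests.get("%s/%s/_search" % (node, index), json=query_body)
    resp.raise_for_status()
    return resp.json()

# Example: a match_all query against a hypothetical "content" index.
print(search("content", {"query": {"match_all": {}}})["hits"]["total"])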

Hoping the Sematext monitoring of queries per node and throughput will help
analyze some of this as well.

If we were to use something like YourKit to analyze this, is there anything
that would be a good pointer to why there are slowdowns?

Thanks,
Scott

Hey,

Are you by any chance using rivers? What does your shard/replica setup
look like? A single index, or multiple indices? And is it slow on the same
query randomly, just at startup, or perhaps across different types of
queries?

Patrick

patrick eefy net

Nope, no rivers.

4 servers, with 4 CPUs and 8 GB of RAM each.

Our indexes are 4 shards, 1 replica.
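
For reference, a minimal sketch (host and index name are hypothetical) of
creating an index with that layout over the REST API; with 4 shards and 1
replica that works out to 8 shard copies, i.e. about 2 per node on a
balanced 4-node cluster:

import requests

# Hypothetical host and index name; adjust to the real cluster.
ES = "http://es-node1:9200"

settings = {
    "settings": {
        "number_of_shards": 4,
        "number_of_replicas": 1,
    }
}

# Set these at index-creation time; the shard count cannot be changed
# afterwards, while the replica count can.
resp = requests.put("%s/content" % ES, json=settings)
resp.raise_for_status()
print(resp.json())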

It is usually slow on queries, and not randomly so.

I am just guessing that we have something misconfigured somewhere; I just
don't know where! Hopefully more monitoring will help track it down.
I just wanted to see if there was anything people could think of to
narrow down why one server is taking the bulk of the update/query load
instead of it being distributed across all of the nodes.

What about facets? The field cache can be a major source of query slowdowns.
How much memory are you allocating to Elasticsearch?
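
A quick way to check both is the nodes stats API; here is a minimal sketch
(host is hypothetical, and the exact endpoint and cache field names vary
between the old 0.x releases and later versions):

import requests

# Hypothetical host; on very old 0.x releases the endpoint was
# /_cluster/nodes/stats rather than /_nodes/stats.
ES = "http://es-node1:9200"

stats = requests.get("%s/_nodes/stats" % ES).json()

for node_id, node in stats.get("nodes", {}).items():
    name = node.get("name", node_id)
    heap_used = node.get("jvm", {}).get("mem", {}).get("heap_used_in_bytes")
    # Cache naming differs across versions (field cache vs. fielddata),
    # so just dump whatever cache-related sections are reported.
    indices = node.get("indices", {})
    caches = dict((k, v) for k, v in indices.items()
                  if "cache" in k or "fielddata" in k)
    print(name, "heap_used_in_bytes:", heap_used, "caches:", caches)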

BTW, Otis is probably on a plane right now heading to Berlin for
Berlin Buzzwords. He helped me out this week to get SPM up and
running.

--
Ivan
