Spikes in usage and Logstash connection issues

Hi, I have a 5-node cluster that I'm using as part of an ELK system. Most
of the time it works great, but today we saw a spike in writes on one of
the nodes, and around the same time indexing on that node spiked too --
which makes sense: more writes mean more indexing. None of the other
servers were particularly taxed, though. Nothing writes to Elasticsearch
other than Logstash, and normally it does a pretty good job of balancing
the load.

Any idea where I could start looking for clues? I went through the logs
but there isn't much information in there; most of it is just debug-level
errors like this:

[logstash-2015.02.13][3] failed to execute bulk item (index) index {[logstash-2015.02.13]

They show up pretty consistently, so they don't seem like anything to
worry about. Where else can I look to see why only one server is getting
all the writes, and how do I mitigate it if it happens again? It ended up
making Elasticsearch unresponsive to the data Logstash was sending.


Can you link us to your LS config?
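
In the meantime, it's worth checking whether the shards for the current
day's index are actually skewed towards that node, and whether its bulk
thread pool is rejecting work. Assuming a 1.x cluster (the hostname below
is a placeholder), something along these lines would show that:

curl 'es-node-1:9200/_cat/shards?v'        # which shards, and how many primaries, sit on each node
curl 'es-node-1:9200/_cat/thread_pool?v'   # per-node bulk active/queue/rejected counts
curl 'es-node-1:9200/_nodes/hot_threads'   # what the busy node is spending its CPU on

Those "failed to execute bulk item" lines are also worth a second look;
the rest of each line usually says why the document was rejected.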


I'm assuming you meant my Elasticsearch config.

These are the options I have specified:
cluster.name: elasticEnvironmentName
node.name: Nodename
path.data: /path/to/data

bootstrap.mlockall: true

gateway.recover_after_nodes: 3
cluster.routing.allocation.node_concurrent_recoveries: 15
indices.recovery.max_bytes_per_sec: 500mb

discovery.zen.minimum_master_nodes: 3
discovery.zen.fd.ping_interval: 15s
discovery.zen.fd.ping_timeout: 60s
discovery.zen.fd.ping_retries: 5

index.search.slowlog.threshold.query.warn: 10s
index.search.slowlog.threshold.fetch.warn: 1s
index.indexing.slowlog.threshold.index.warn: 10s

indices.fielddata.cache.size: 50%
indices.breaker.fielddata.limit: 60%

cloud.aws.access_key: AWSAccessKey
cloud.aws.secret_key: AWSSecretKey
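
If it helps, the runtime values for some of these can be double-checked
from the nodes APIs; for example (hostname is a placeholder):

curl 'es-node-1:9200/_nodes/process?pretty'        # shows whether mlockall actually took effect on each node
curl 'es-node-1:9200/_nodes/stats/indices?pretty'  # per-node indexing and fielddata memory stats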


I did not; I meant your LS (Logstash) config.
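
For reference, the part I'm after is the elasticsearch output block. As a
sketch only (hostname, protocol and index pattern below are placeholders,
not taken from this thread, and assume the 1.4/1.5-era plugin options):

output {
  elasticsearch {
    host => "es-node-1.example.com"     # placeholder hostname
    protocol => "http"
    index => "logstash-%{+YYYY.MM.dd}"
  }
}

If the output uses the http protocol pointed at a single host like this,
every bulk request from Logstash is coordinated by that one node, which on
its own can explain a write spike there; the node/transport protocols, or
a load balancer in front of several nodes, spread that work around.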
