Interesting Issue

We are running Elasticsearch with three nodes. The problem we keep running
into is that Kibana eventually begins to run extremely slowly and soon after
becomes unresponsive. If you hit refresh, the wheels just keep spinning and
nothing gets displayed. Restarting the cluster appears to correct the
issue, but within 24 hours it begins again. Below is our cluster health:

{
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 4,
"number_of_data_nodes" : 3,
"active_primary_shards" : 341,
"active_shards" : 682,
"relocating_shards" : 2,
"initializing_shards" : 0,
"unassigned_shards" : 0
}
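
For reference, that output comes from the cluster health API; here is a minimal way to re-check it at any time, assuming a node is reachable on the default HTTP port (the host and port here are assumptions about your setup):

# Query the cluster health API on a node reachable at localhost:9200
curl -XGET 'http://localhost:9200/_cluster/health?pretty'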

When we initially set everything up we had a single node, and we did have a
span of three weeks where we didn't have to restart the Elasticsearch
service. When we began to pump NetFlow data in, we began to have issues. I
thought that running Logstash and ES on the same node was perhaps causing
the issue, so we added two virtual nodes and had one of them host the
Logstash instance for just NetFlow. I thought that with the clustering the
issue would be resolved, but sadly I still have to start and stop the
services every day. When I look back at the data, it flows in the entire
time until flatlining for a few hours, and then it picks up again once I
restart the services.

Thanks in advance!

My first guess is that you are running out of memory or something similar.
What can you see in the Elasticsearch logs?

Do you see any WARN messages about GC?
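
If it helps, here is a quick way to scan for them; the log path is an assumption based on a default package install, so adjust it for your setup:

# Scan the Elasticsearch log for GC-related WARN entries
# (log path assumes a default package install; adjust as needed)
grep -i 'WARN' /var/log/elasticsearch/elasticsearch.log | grep -i 'gc'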

--
David ;-)
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs

Is there a particular log I should look at? In elasticsearch.log I am
seeing a lot of parse failures for a timestamp field.
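
In case it helps to pull those entries out for sharing, something along these lines should work; the match string is just an assumption, so use whatever actually appears in your log:

# Extract recent timestamp parse failures from the log
# ('failed to parse' is an assumed match string; adjust to what your log shows)
grep -A 5 'failed to parse' /var/log/elasticsearch/elasticsearch.log | tail -n 100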

Maybe you should gist your logs and copy the link here?

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr

Sorry, I am a n00b when it comes to ES. How would I go about that?

Upload your logs to gist.github.com and paste the link here.

There may be some more details here: http://www.elasticsearch.org/help/
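
If the full file is too big to paste, a minimal way to grab just the recent portion for the gist (again, the log path is an assumption based on a default package install):

# Copy the last 1000 lines of the log into a file you can paste into a gist
# (log path assumes a default package install; adjust as needed)
tail -n 1000 /var/log/elasticsearch/elasticsearch.log > es-log-excerpt.txt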

--
David Pilato | Technical Advocate | Elasticsearch.com
@dadoonet | @elasticsearchfr

Thanks for the help thus far! I'll be posting the logs on Monday when I
get into the office, but in the meantime it has happened again. When I went
to Kibana, I got an actual error that I believe is pointing me in the right
direction:

Oops! SearchPhaseExecutionException[Failed to execute phase [query], all
shards failed; shardFailures
{[CN48pxdyTUWBc8Iyft5vpQ][logstash-2014.06.15][2]:
EsRejectedExecutionException[rejected execution (queue capacity 1000) on
org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction$4@b597ac6]}]

We have, I believe, 24 cores on our main box, so I figure I can set the
thread pool size to 120 and the queue size to -1?
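
For what it's worth, on the 1.x series the search thread pool settings can be set in elasticsearch.yml or changed at runtime via the cluster settings API; a minimal sketch of the runtime form follows (the value shown is only a placeholder, not a recommendation, and note that an unbounded queue of -1 trades rejections for memory pressure):

# Raise the search thread pool queue size at runtime (dynamic on ES 1.x)
# The value 2000 is only an illustration, not a tuned recommendation
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '
{
  "transient": {
    "threadpool.search.queue_size": 2000
  }
}'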
