We are running Elasticsearch with three nodes. The problem we keep running
into is that Kibana eventually begins to run extremely slowly and soon after
becomes unresponsive. If you hit refresh, the spinner just keeps going and
nothing gets displayed. Restarting the cluster appears to correct the
issue, but within 24 hours it begins again. Below is our cluster health:
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 4,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 341,
  "active_shards" : 682,
  "relocating_shards" : 2,
  "initializing_shards" : 0,
  "unassigned_shards" : 0
}
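
For reference, the output above is from the cluster health API. A minimal sketch for pulling it programmatically, in case it helps to sample it while the slowdown is happening (the host and port are assumptions; point it at any node in the cluster):

import json
import urllib.request

# Assumed endpoint; adjust host/port to one of the cluster nodes.
CLUSTER_HEALTH_URL = "http://localhost:9200/_cluster/health"

# Fetch and pretty-print the current cluster health.
with urllib.request.urlopen(CLUSTER_HEALTH_URL) as resp:
    health = json.load(resp)

print(json.dumps(health, indent=2))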
When we initially set everything up we had a single node, and we did have a
span of three weeks where we didn't have to restart the Elasticsearch
service. When we began to pump Netflow data in, we began to have issues. I
thought that perhaps having Logstash and ES running on one node was causing
the issue, so we added two virtual nodes and had one of them host the
Logstash instance for just Netflow. I thought the clustering would resolve
the issue, but sadly I still have to stop and start the services every day.
When I look back at the data, it flows in the entire time until flatlining
for a few hours, and then picks up again once I restart the services.
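
To make the flatline easier to catch as it happens, here is a minimal sketch that polls the document count of the Netflow indices through the index stats API. The node address and the netflow-* index pattern are assumptions; adjust them to match the actual indices Logstash writes to:

import json
import time
import urllib.request

# Assumed node address and index pattern; adjust to the real Netflow indices.
STATS_URL = "http://localhost:9200/netflow-*/_stats/docs"

def doc_count():
    # Return the primary-shard document count across the matched indices.
    with urllib.request.urlopen(STATS_URL) as resp:
        return json.load(resp)["_all"]["primaries"]["docs"]["count"]

# Sample the count every five minutes; when the delta stays at zero,
# the pipeline has flatlined again.
previous = doc_count()
while True:
    time.sleep(300)
    current = doc_count()
    print("docs: %d (+%d in the last 5 minutes)" % (current, current - previous))
    previous = current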
Thanks in advance!