Hi there,
Our Elasticsearch has started running very slowly. After every restart I can use Kibana for a while, but it gradually slows down and eventually requests time out.
I found the following in the logs:
Sep 20 08:58:02 localhost dockerd[3952]: [2019-09-19T22:58:02,333][WARN ][o.e.m.j.JvmGcMonitorService] [TVag-Jr] [gc][old][1637][55] duration [10.9s], collections [1]/[11.6s], total [10.9s]/[18s], memory [3.7gb]->[3.4gb]/[3.9gb], all_pools {[young] [355.8mb]->[54.9mb]/[532.5mb]}{[survivor] [55.3mb]->[0b]/[66.5mb]}{[old] [3.3gb]->[3.3gb]/[3.3gb]}
The old generation appears to stay full even after collection ([old] [3.3gb]->[3.3gb]/[3.3gb]), so I tried to increase the heap size, but it didn't help.
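For context, this is roughly how the Elasticsearch container is started and how the heap is set; I am reconstructing it from memory, so the exact image tag and flags may differ slightly:

# Start the Elasticsearch 5.5.1 container, map the REST API to port 8010,
# and set the minimum and maximum JVM heap through ES_JAVA_OPTS.
# (Reconstructed from memory; details may not match our compose file exactly.)
docker run -d --name elasticsearch \
  -p 8010:9200 \
  -e "ES_JAVA_OPTS=-Xms10g -Xmx10g" \
  docker.elastic.co/elasticsearch/elasticsearch:5.5.1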
Here is some information about our environment:
- There are three containers running Elasticsearch 5.5.1, Kibana 5.5.1, and Logstash 5.3.0.
- RabbitMQ 3.6.0 runs on the VM itself.
- The VM has 8 CPUs and 20 GB of dedicated memory.
- The Elasticsearch heap size is 10 GB.
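In case it is useful, the heap the node actually picked up can be checked with the _cat/nodes API (heap.max and heap.percent are standard columns; same host and port as the health check below):

curl -XGET 'localhost:8010/_cat/nodes?v&h=name,heap.max,heap.percent'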
Cluster status:
[root@eap-elk01 dragan]# curl -XGET 'localhost:8010/_cluster/health?pretty'
{
  "cluster_name" : "docker-cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 7233,
  "active_shards" : 7233,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 7233,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 50.0
}
As you can see, there are a lot of shards (7233 primaries, plus 7233 unassigned ones that are presumably replicas with nowhere to go on a single node), and I assume that this is the main cause of the issue.
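To see where all of those shards come from, the per-index shard counts can be dumped from the same node:

curl -XGET 'localhost:8010/_cat/indices?v&h=index,pri,rep,docs.count,store.size'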
Any help would be much appreciated!
Cheers