Poor Elasticsearch performance

Hi, I'm having a problem with my running Elasticsearch server. At the beginning it was working fine when I needed to update data, but after trying to update several files at once (5 files), the system started performing poorly: it now takes more than 7 or 8 hours to process a file, where previously it took 1 hour. The performance problem has also reached Kibana (it takes 2 minutes to load the main page with data, where previously it took less than a second). I think the problem is the heap (which is configured as -Xms2g -Xmx8g); for example, Kibana is using 99% of it. I tried deleting all the indices from the node, but the problem persists.
I tried to set bootstrap.mlockall: true, but the server doesn't allow it (Unable to lock JVM Memory: error=12, reason=Cannot allocate memory).
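(For reference, that error usually means the Elasticsearch user isn't permitted to lock memory, not that the setting is wrong. A sketch of the usual fix, assuming a systemd-managed install — note that in Elasticsearch 5.x and later the setting was renamed from bootstrap.mlockall to bootstrap.memory_lock:

```
# elasticsearch.yml (5.x+; older versions use bootstrap.mlockall)
bootstrap.memory_lock: true

# /etc/systemd/system/elasticsearch.service.d/override.conf
# Allow the service to lock memory, then:
#   systemctl daemon-reload && systemctl restart elasticsearch
[Service]
LimitMEMLOCK=infinity
```

On non-systemd installs the equivalent is a memlock entry of "unlimited" for the elasticsearch user in /etc/security/limits.conf.)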
The server has 8GB of RAM.
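(With 8 GB of RAM, the -Xms2g / -Xmx8g combination is itself a problem: the heap can grow to all of physical memory, starving the filesystem cache Lucene depends on. The usual guidance is to set min and max equal and keep the heap at or below roughly 50% of RAM. A sketch of jvm.options under that assumption:

```
# jvm.options (path varies by install, e.g. /etc/elasticsearch/jvm.options)
# Equal min/max avoids heap-resize pauses; ~50% of RAM (here 4g of 8g)
# leaves the other half for the OS filesystem cache.
-Xms4g
-Xmx4g
```
)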

This is the current status of the health state cluster:
"cluster_name" : "elasticsearch",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 1,
"number_of_data_nodes" : 1,
"active_primary_shards" : 6,
"active_shards" : 6,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0

You should follow these guidelines when setting the heap size.

Before posting this, the heap size was set to 2 GB min and 8 GB max. Now I've changed it according to your link to 4 GB max, but the performance is the same.

What kind of storage do you have? What does disk I/O and iowait look like when you are updating? How are you performing the updates?

Right now, while Filebeat is running, the top output looks like this:

root      20  0  4604800  476744  18296  S  148.5  5.8  3:01.82  java
elastic+  20  0  8042688  687640  19948  S   48.5  8.4  1:18.83  java
root      20  0   379012   27884   8644  S   30.9  0.3  0:15.81  filebeat

Can you run iostat -x while the update is going on?

What type of disk do you have?

How are you performing the updates?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.