This is using ILM, which you can tell by the counter at the end of the date in the index name. That is the default for Beats these days, and it is what you should be using.
You might want to take a look at this article about shard sizing.
At first glance, that is a lot of shards for a cluster of that size. We typically recommend 10-20 shards per 1 GB of heap. Your total cluster has ~1.5GB of heap (50% of the 3GB total), so you should only have ~30 shards total; you have 10x that, and that will negatively affect your performance.
It's possible this is contributing to your issue in your other thread.
Your deployment is currently running high on JVM heap usage, hence the parent circuit breaker exception you observed.
A couple of options here:
If you have unwanted / old indices that you are not using anymore, you can use the Delete API to delete them. Each index has 1 or many shards (depending on your index settings). Deleting indices can be done, for example, via the Kibana Console or via the API Console (cf. Access the Elasticsearch API console); see the example after these options.
Scale up your deployment to allocate more resources.
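For the first option, here is a minimal sketch from the Kibana Console (the index name below is just a placeholder, substitute one of your own unused indices):

GET _cat/indices?v&s=store.size:desc

DELETE /filebeat-2021.01.01-000001

The first request lists your indices sorted by size so you can spot old ones; the second deletes a single index. Be careful, deletes are permanent.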
I cannot show you, as Kibana is down. I was in Index Lifecycle Management for the index and edited the Hot Phase. There I enabled the Shards section, set the number to 30 shards, and saved.
ILM is not really the correct way to attempt to fix that. And you don't set the number of shards in ILM; you set the index size before rollover, so I am not sure what you really did.
If I were you I'd open a support ticket.
Once you get the cluster up and running, you really should put the ILM policy and index templates back to the default settings. The defaults recommended for time series data are:
One primary shard
One replica shard
And in the hot phase of ILM
50 GB per index for rollover
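For reference, this is roughly what those defaults look like expressed as API calls, assuming a recent 7.x version for the composable index template API (the policy and template names below are placeholders, not what Beats actually creates):

PUT _ilm/policy/my-timeseries-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb"
          }
        }
      }
    }
  }
}

PUT _index_template/my-timeseries-template
{
  "index_patterns": ["my-logs-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 1,
      "index.number_of_replicas": 1
    }
  }
}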
However, your cluster is tiny. Since you only have 30 gigabytes of disk storage on each node, obviously 50 GB shards won't work.
You can make each index 2GB or so in size. That would give you 15 or so indices / shards per node.
So depending on what your actual data ingest and retention requirements are, you may need to adjust the capacity / size of your cluster.
Did you actually look at / read all the great docs that @ropc and I shared? There was a lot of great info in those.
If you left the defaults for the number of primary and replica shards at 1 and 1 respectively, then I would set that rollover size in ILM to 2 GB.
BTW, if you changed the number of primary shards in your mapping or index template, you should set it back to 1 or take it out. If you do not set it, 1 is the default.
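As a rough sketch, changing just the rollover size in the policy would look something like this (placeholder policy name again; the Kibana ILM UI does the same thing in the Hot phase rollover settings):

PUT _ilm/policy/my-timeseries-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "2gb"
          }
        }
      }
    }
  }
}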
Again, your cluster is on the very small scale, so these numbers are all a bit skewed.
With respect to the shards and where they are located, they should be pretty much evenly distributed across the 3 nodes. I would not get too concerned about where they are.
You can go to Kibana -> Dev Tools and run
GET _cat/shards?v
to see where they are.
I highly recommend taking a look at the docs we sent. AND Elastic provides a lot of free training and webinars; I would take advantage of those.
It's going to stay a bit high with such small nodes... quite literally there is only 512MB of JVM heap on each node, and that is the smallest you can possibly run.
Remember, the goal is about 10-20 shards per 1 GB of JVM heap: you have 1.5GB of JVM heap, so the goal would be about 30 shards, and you are still at 2x that. Again, these are not hard rules, and with such a limited amount of JVM RAM it is going to be tight.
And in general your indices are still extremely small; each index / shard takes memory space in the JVM.
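If you want to keep an eye on it, you can check heap usage per node from Dev Tools with something like:

GET _cat/nodes?v&h=name,heap.percent,heap.max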
I would suggest scaling the cluster up to 2GB or 4GB nodes. You can do that from the deployment screen: just hit Edit, change the setting, and apply. It will take a few minutes, and it will do a rolling change with no downtime.
Me... the smallest I ever run is 4GB nodes, but that is just me; you can try 2GB nodes first.
You can scale these clusters up to HUGE sizes, terabytes of RAM and 100s of TBs of storage, which you clearly do not need at this point.
Autoscaling is based on disk usage today, so that is not the correct way to fix this issue (in the near future it will look at memory and CPU pressure).
You need to manually scale your cluster.
Go into the Elastic Cloud Console and click on your deployment.
Click Edit
Make the changes
And Save at the bottom.
It will take a few minutes.
Then try your reindex again.