I am facing an issue with index creation recently. I'm running the ELK stack on an AWS Ubuntu EC2 instance.
I configured Elasticsearch to create indices on a daily basis, like this: ( orchid-cakelog-pos-2023.09.07 ).
Currently I have 48 indices, including the system-created indices. The issue is that no new indices are created once the count reaches 48. I verified this by deleting one of those 48 indices; after that, one new index was created. Is there a limit for this? Do I need to change anything?
There is a limit on the number of shards per node, which by default is 1000 shards per node, but I doubt that you are reaching it.
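If you want to check how close you are to that limit, a quick way (assuming you can reach the cluster, e.g. via Kibana Dev Tools) is to list the shards and look at the cluster health counters:

```
GET _cat/shards?v
GET _cluster/health?filter_path=active_shards,active_primary_shards,unassigned_shards
```

The total shard count (including replicas) is what matters against the per-node limit on a single-node cluster.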
It is more likely that you are hitting an issue with disk space. Depending on the free space on your disk, Elasticsearch will stop indexing new data until you free up space, but when this happens it will be logged.
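To check whether disk space is the cause, you can look at disk usage per node and at whether the flood-stage watermark has put any indices into a read-only state (a sketch, assuming Kibana Dev Tools access):

```
GET _cat/allocation?v
GET _all/_settings/index.blocks.read_only_allow_delete
```

If `index.blocks.read_only_allow_delete` is `true` on any index, the disk watermarks were hit at some point.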
How are you indexing your data? What do you have in the Elasticsearch logs? What are your specs, e.g. how big is the Elasticsearch data disk and how much of it is in use?
Here is the flow of how my logs are sent to Kibana: Apache Logs -> Filebeat -> Logstash -> Elasticsearch -> Kibana.
I have enough disk space on my server; please check the screenshot below.
As I said previously, if I delete any one of those 48 indices (so the total count is now 47), a new index is created. Once the count reaches 48 again, no new index will be created.
If you actually have 480 indices and not 48, the possibility that you are running into the 1000-shard limit is plausible (some system indices may also be counted when determining the limit), although this should show up in the logs. As you have only a single node, I would recommend setting the number of replicas to 0 for all indices using the update settings API.
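For example, the update settings API call would look like this (and, as an assumption on my part, an index template so that future daily indices also get 0 replicas, using the `orchid-cakelog-pos-*` pattern from your naming scheme):

```
PUT _all/_settings
{
  "index.number_of_replicas": 0
}

PUT _index_template/orchid-cakelog-pos
{
  "index_patterns": ["orchid-cakelog-pos-*"],
  "template": {
    "settings": {
      "number_of_replicas": 0
    }
  }
}
```

On a single node the replicas can never be assigned anyway, so this halves your shard count at no cost.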
I would also recommend looking into how you are sharding your data, as you have a lot of very small indices/shards, which is very inefficient. If you have a reasonably long retention period it may make sense to switch to monthly indices, and I would also recommend combining indices where possible.
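Switching to monthly indices is usually just a change to the index name pattern in the Logstash output. A minimal sketch, assuming a standard `elasticsearch` output block (the `hosts` value is a placeholder for your setup):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # %{+YYYY.MM} instead of %{+YYYY.MM.dd} rolls over monthly instead of daily
    index => "orchid-cakelog-pos-%{+YYYY.MM}"
  }
}
```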
Changing the settings will resolve the problem for now. Unless you know you will not hit the new limit and have an efficient retention policy in place, I would strongly recommend changing how you shard the data to avoid so many small indices and shards. I have on more than one occasion seen users avoid fixing the root cause (too many small indices and shards) and just increase these settings because it is easier. At some point you may find that increasing the settings no longer works and that your cluster is inoperable. By then it may be too late to fix the problem, and unless you have a snapshot in place that you can restore, you may lose all of the data.
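For reference, the setting in question is the cluster-wide shard limit; raising it looks like this (the value 2000 is just an example, and again, this only postpones the problem):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.max_shards_per_node": 2000
  }
}
```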