Indices Not Being Created After Reaching 48 Indices

Hi

I am facing an issue related to index creation recently. I'm running the ELK stack on an AWS Ubuntu EC2 instance.
I configured Elasticsearch to create indices on a daily basis, like this: orchid-cakelog-pos-2023.09.07.
Currently I have 48 indices, including system-created indices. The issue is that it's not creating new indices after it reaches 48. I verified this by deleting one of these 48 indices, after which one new index was created. Is there any limit on this? Do I need to change anything?

Can anyone advise on this?

Thanks in advance.

There is a limit on the number of shards per node, which by default is 1000 shards per node, but I doubt that you are reaching it.
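
For reference, you can check the current shard counts and the configured limit from Kibana Dev Tools with something like the following (just a sketch; the filter_path parameter is only there to trim the output and can be dropped):

GET _cluster/health

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.max_shards_per_node

The health response shows the active and unassigned shard counts, and the settings response shows whether the per-node shard limit has been changed from its default.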

It is more probable that you are hitting some issue with disk space; depending on the free space on your disk, you won't be able to index new data until you fix the disk space, but when this happens it will be logged.

How are you indexing your data? What do you have in the Elasticsearch logs? What are your specs, e.g. how big is the data disk for Elasticsearch and how much of it is used?

@leandrojmp

Here is the flow of how my logs are sent to Kibana:
Apache Logs -> Filebeat -> Logstash -> Elasticsearch -> Kibana.

I have enough disk space on my server. Please check the screenshot below.
[Screenshot (841): disk usage output]

As I said previously, if I delete any one of those 48 indices (so the total count is now 47), a new index gets created, and once the count reaches 48 again, it won't create a new one.

You have just 40 GB of free space; depending on the average size of your indices, this may not be enough.

If you reach 90% of disk usage (the default high disk watermark), Elasticsearch will stop allocating new shards to the node.
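
You can see the disk usage Elasticsearch itself reports per node, and the watermark defaults, with something along these lines in Dev Tools (again just a sketch; the filter_path is optional):

GET _cat/allocation?v

GET _cluster/settings?include_defaults=true&filter_path=*.cluster.routing.allocation.disk.watermark*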

Can you provide more context? Share the result of the request GET _cat/shards in Kibana Dev Tools.

Also, are you using any Index Lifecycle Policy?
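
Something like this in Dev Tools will show both; the sort parameter on _cat/shards is only there for readability:

GET _cat/shards?v&s=index

GET _ilm/policy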

Is this response complete?

You mentioned you had issues with indices with this naming pattern: orchid-cakelog-pos-2023.09.07

But in the request you shared there is no index named orchid-cakelog-pos-*, so I'm not sure whether that was just an example or the response you shared is incomplete.

Assuming that the response is complete, I see no issue; you do not have many shards and there are no particularly big indices.

Can you confirm where the data path of Elasticsearch is? Look in elasticsearch.yml for the path.data setting.

Then go to that path on your server, run df -h ., and share the result.
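
If it is easier, the same information is also available over the API; something like the following (a sketch, with filter_path only used to trim the responses) shows the configured paths and the disk usage Elasticsearch reports for its data path:

GET _nodes/settings?filter_path=nodes.*.settings.path

GET _nodes/stats/fs?filter_path=nodes.*.fs.data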

Also, did you check the logs of Logstash and Elasticsearch? Please check the logs from the moment you had the issue and share them.

If you have 480 and not 48 indices, the possibility that you are running into the 1000-shard limit is plausible (maybe some system indices are accounted for when determining the limit), although this should show up in the logs. As you only have a single node, I would recommend you set the number of replicas to 0 for all indices using the update settings API, as shown below.
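
A minimal sketch of that with the update index settings API (the * target hits every index; narrow the pattern if you only want to touch your own indices):

PUT */_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

If every index currently has one replica configured, this should roughly halve the shard count; on a single-node cluster the replica shards can never be assigned anyway.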

I would also recommend you look into how you are sharding data, as you have a lot of very small indices/shards, which is very inefficient. If you have a reasonably long retention period it may make sense to switch to monthly indices, and I would also recommend combining indices where possible, as sketched below.
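
As an illustration only (the target monthly index name here is my assumption), the daily indices for a month could be combined with the reindex API, and the daily ones deleted once the copy has been verified:

POST _reindex
{
  "source": {
    "index": "orchid-cakelog-pos-2023.09.*"
  },
  "dest": {
    "index": "orchid-cakelog-pos-2023.09"
  }
}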

This is your issue, you reached the shard limit of your cluster.

In your first post you mentioned that you had 48 indices, but in fact you have way more than 48 indices in your cluster.

You have too many small indices.

The immediate fix is to increase the maximum number of shards per node:

PUT _cluster/settings
{
  "persistent" : {
    "cluster.max_shards_per_node" : 2000
  }
}

But this is not a real solution; you should try to decrease the number of shards and small indices in your cluster, and for that you need to move away from daily indices, for example by switching to monthly or rollover-based indices, as sketched below.
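
For example (the policy name and thresholds below are assumptions, not something discussed in this thread), an ILM policy with rollover creates a new index only when the current one gets big or old enough, instead of one index per day:

PUT _ilm/policy/orchid-cakelog-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_primary_shard_size": "50gb",
            "max_age": "30d"
          }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

The policy would then be referenced from an index template together with a rollover alias; it is a bit more setup than date-based index names in the Logstash output, but it keeps the shard count under control.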


@leandrojmp @Christian_Dahlqvist

Thank you for your response. I will modify the index-related settings.

Changing the settings will resolve the problem for now. Unless you know you will not hit the new limit and have an efficient retention policy in place, I would strongly recommend you change how you shard the data to avoid so many small indices and shards. I have on more than one occasion in the past seen users avoid fixing the root cause (too many small indices and shards) and just increase these settings because it is easier. At some point you may find that increasing the settings no longer works and that your cluster is inoperable. At that point it may be too late to fix the problem, and unless you have a snapshot in place that you can restore, you may lose all of the data.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.