Nodes crash when an index reaches 50GB, so I would like to auto-create indexes. Is this possible?

Hey all,

I noticed that my nodes crash when my index grows beyond 50GB, so I would like to create a new index when it reaches 50GB, assign the events-active alias to the new index, and remove that alias from the old one.

Would this be possible with Elasticsearch itself, and if so, how?
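For reference, this is exactly what Elasticsearch's built-in ILM rollover does: when a condition is met, it creates the next index in the series and moves the write alias over automatically. A minimal sketch of such a setup, where the policy name, template name, and index pattern are all illustrative and would need to match your own naming:

```shell
# ILM policy: roll over once a primary shard reaches 50GB
# (max_primary_shard_size is available from 7.13 onward).
curl -X PUT "localhost:9200/_ilm/policy/events-rollover" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb" }
        }
      }
    }
  }
}'

# Index template so every new index in the series picks up the
# policy and the rollover alias automatically.
curl -X PUT "localhost:9200/_index_template/events" \
  -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["events-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "events-rollover",
      "index.lifecycle.rollover_alias": "events-active"
    }
  }
}'

# Bootstrap the first index with the write alias; rollover will
# create events-000002, events-000003, ... and move the alias.
curl -X PUT "localhost:9200/events-000001" \
  -H 'Content-Type: application/json' -d'
{ "aliases": { "events-active": { "is_write_index": true } } }'
```

After this, writes go to the events-active alias and the alias swap happens without any external automation.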

Which version of Elasticsearch are you using? What do the Elasticsearch logs say? What is the specification of the cluster?

Version 7.17.4. The sysops team didn't find any logs. What do you mean by "specification of the cluster"?

We have 3 master nodes and 5 data nodes. I was just told to automate the process, that's all. This happens maybe once a month.

How is the cluster deployed? How come no logs are available?

Elasticsearch logs generally contain vital information in situations like this.

How much CPU, RAM and heap do the different node types have? What type of storage is being used?

How many nodes crashed? How did this happen? Do you have any monitoring in place?

It sounds odd that a single index going above a specific size would crash the node. Are you using any mapping types that may drive memory usage?

So the data nodes have 30GB of RAM each, and the master nodes have 8GB each. I think they have 4 vCPUs each, but I'm not sure about the cores as I don't have access to our AWS environment yet.

And apparently the nodes don't crash, my bad; the index just dies, I'm assuming because it's hitting our storage limit.

So I want to make a new index automatically and maybe move the old index into cold storage.
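The rollover-plus-cold-storage flow described above can be expressed in a single ILM policy. A sketch, assuming you have cold-tier nodes tagged with a node attribute such as data: cold (the policy name, age threshold, and attribute are illustrative):

```shell
# One policy: roll over in the hot phase, then after 30 days
# relocate the rolled-over index to cold-tier nodes.
curl -X PUT "localhost:9200/_ilm/policy/events-policy" \
  -H 'Content-Type: application/json' -d'
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_primary_shard_size": "50gb" }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {
          "allocate": { "require": { "data": "cold" } },
          "set_priority": { "priority": 0 }
        }
      }
    }
  }
}'
```

On 7.x with formal data tiers configured, the cold phase migrates indices between tiers automatically, so the explicit allocate action is only needed with custom node attributes.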

What is the heap size?

Indices do not die. They may, however, go read-only if you fill up your storage.

How much data does the cluster hold? How much storage does each data node have?

Does the index hold immutable data?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.