Is there a way to set a size limit per index in Elasticsearch?

Hi guys,

Here is our scenario:

There are many indexes with different names in my ES cluster, and the total ES data disk size is 1 TB. To prevent any single index from growing extremely large and impacting indexing across all indexes:

Q1: Can ES set a different size limit for each index?
Q2: If Q1 is yes, can ES automatically clean up an index when it is close to reaching its size limit?

We have centralized all log types into one big ES cluster, with a different index per log type. The above is the bottleneck blocking us from moving forward. Looking forward to your advice. Thanks.


I would recommend that you check out the Rollover API, which will allow you to roll over an index based on criteria you define:
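As a sketch of a rollover request (the alias name `logs_write` and the condition values here are placeholders; note that a `max_size` condition only exists in newer ES versions, so I'll stick to `max_age`/`max_docs`):

```
POST /logs_write/_rollover
{
  "conditions": {
    "max_age": "1d",
    "max_docs": 100000000
  }
}
```

The index is only rolled over if at least one of the conditions is met when the request is made.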

@dakrone, after reviewing your suggestion, it seems it only carries out the action when you run a POST request.

Suppose the following scenario: we POST a rollover request now, and the next one won't run for another 5 minutes. Within those 5 minutes the index reaches the threshold and can no longer accept inserts until the next rollover, which still puts disk usage at risk. Does ES have any automatic/real-time trigger for the rollover action?

Not currently, though it is something we are investigating for the future. For now, though, you could set up a cron job that calls the _rollover API every 2 (or however many) minutes; then your client only has to use the alias and not worry about when the index is rolled over.
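A minimal sketch of that cron workaround (the host, alias name, interval, and condition values are all assumptions; the `Content-Type` header is required on newer ES versions):

```
# crontab entry: attempt a rollover every 2 minutes; ES only actually
# rolls over if one of the conditions in the request body is met
*/2 * * * * curl -s -XPOST 'http://localhost:9200/logs_write/_rollover' \
  -H 'Content-Type: application/json' \
  -d '{"conditions": {"max_age": "1d", "max_docs": 100000000}}'
```

Calling _rollover more often than needed is harmless: when no condition is met, the request is a no-op.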

@dakrone, I hope we'll see it in the changelog of a future release. Thanks for your suggestion.

@lauea Until Elasticsearch gets this automated, you can automate it with Elasticsearch Curator.
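For reference, recent Curator versions include a `rollover` action; a minimal action file might look like this (the alias name and condition values are placeholders):

```
actions:
  1:
    action: rollover
    description: "Roll over the index behind the logs_write alias"
    options:
      name: logs_write
      conditions:
        max_age: 1d
        max_docs: 100000000
```

Like the cron approach, this still has to be run on a schedule; Curator just wraps the _rollover call in a config file.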

@theuntergeek, I want ES to trigger the action automatically when the threshold is reached. Curator can only be run on a schedule, so it doesn't know the index size in real time. We do already have Curator in our ecosystem, by the way.

Anyway, we'll look forward to a future release.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.