Shard count for a big index?

Hi elastic family,

Before I ask, let me describe my scenario.
I centralize logs into Elasticsearch using Logstash,
and I use monthly indices.
So far it has been working properly.

As far as I know, the size limit of a shard is 50 GB.
At the moment I use the default shard settings for each index: 5 primary shards, each with one replica.
I read in one post that about 35 GB is a good size for each shard.
So I assume roughly 200 GB is suitable for my monthly index (5 primaries × ~35-40 GB each).

My concern is what happens if the logs grow bigger than 200 GB a month; they could reach 300 GB, for example.
Do I need to increase the shard count, say to 7 primary shards (each with one replica), as in the sketch below?
What should I do? What is the best practice for such a scenario?
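
Something like this is what I mean (the index name here is just an example):

```
PUT /logs-2019.06
{
  "settings": {
    "number_of_shards": 7,
    "number_of_replicas": 1
  }
}
```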

Looking forward to your replies...
Thanks so much

That 50 GB figure is not a hard limit; Elasticsearch places no such limit on the size of a shard. It might be a good idea not to let your shards grow too large, since larger shards take proportionally longer to recover, and 50 GB is a reasonable target size, but it is a target, not a limit.
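
If you want to see how large your shards actually are today, the `_cat/shards` API will show you (the index name here is just an example):

```
GET _cat/shards/logs-2019.05?v&h=index,shard,prirep,state,store&s=store:desc
```

The `store` column is each shard's size on disk, so you can check how close you are to your target.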

Use as few primaries as possible (1 is the default in recent versions) and use ILM to roll over to a new index when it reaches the right size or age.
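
As a minimal sketch of that setup (the policy name `logs-policy`, the template name, and the `logs` alias are placeholders, and the exact APIs vary a little between versions):

```
# ILM policy: roll over when the index reaches 50 GB or is 30 days old.
# With a single primary, max_size of 50gb means the one primary shard
# rolls over right around the target size.
PUT _ilm/policy/logs-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "30d"
          }
        }
      }
    }
  }
}

# Template applying one primary shard and the policy to new indices
PUT _template/logs-template
{
  "index_patterns": ["logs-*"],
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1,
    "index.lifecycle.name": "logs-policy",
    "index.lifecycle.rollover_alias": "logs"
  }
}

# Bootstrap the first index with the write alias
PUT logs-000001
{
  "aliases": {
    "logs": { "is_write_index": true }
  }
}
```

Logstash then writes to the `logs` alias, and ILM creates `logs-000002`, `logs-000003`, and so on as each index fills up, regardless of whether you ingest 200 GB or 300 GB a month.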

Thank you @DavidTurner for correcting my misunderstanding, and for the advice.
