Too many shards, 1960 of them

Hi,
Originally, like most users, I had a single logstash-* index. As I rolled out Elasticsearch, it seemed reasonable to define a new index for each application or group of apps. That has led me to the problem in the title: too many indexes and too many shards, 1960 of them.
For a given day I have 11 different indexes; the biggest is topbeat at 175GB and the smallest is a Tomcat log index at 4MB.
The average is about 30GB each.
Is it a bad idea to break out my indexes per application? Should I consolidate them all into one or two?
My rationale for breaking them up was that I could set different retention periods for different indexes if I wanted to. As it turns out, I'm keeping everything for the same amount of time.
Any thoughts on this?
Thanks,
Tim

It's a good idea.

Use the _shrink API to reduce the shard count a day or two after the indexes have rolled over :slight_smile:
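
In case it helps, here's a minimal sketch of that workflow in Python with the requests library, assuming a 5.x cluster at localhost:9200; the index and node names are hypothetical. _shrink requires writes to be blocked and a copy of every shard on one node first:

```python
import requests

ES = "http://localhost:9200"      # assumed cluster address
SOURCE = "topbeat-2016.11.01"     # hypothetical rolled-over daily index
TARGET = SOURCE + "-shrunk"

# 1. Block writes and require a copy of every shard on one node,
#    both preconditions for _shrink. "shrink-node-1" is a hypothetical
#    node name (see GET _cat/nodes for yours).
resp = requests.put(ES + "/" + SOURCE + "/_settings", json={
    "index.routing.allocation.require._name": "shrink-node-1",
    "index.blocks.write": True,
})
resp.raise_for_status()

# 2. Once the relocation finishes, shrink down to a single primary shard.
resp = requests.post(ES + "/" + SOURCE + "/_shrink/" + TARGET, json={
    "settings": {
        "index.number_of_shards": 1,
        "index.number_of_replicas": 1,
        # clear the settings copied over from the source index
        "index.routing.allocation.require._name": None,
        "index.blocks.write": None,
    },
})
resp.raise_for_status()
print(resp.json())
```

After the target index goes green you can delete the source index and keep only the single-shard copy.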

Currently I'm running ES 2.4 and using the upgrade migration plugin. I'm getting this:
Total primary shards
In 5.x, a maximum of 1000 shards can be queried in a single request. This cluster has 1955 primary shards.

What can/should I do about this?

That limit only applies to shards being queried, e.g. through Kibana. It won't stop the upgrade :slight_smile:
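
For what it's worth, I believe that 1000-shard cap is the 5.x soft limit action.search.shard_count.limit, which is a dynamic cluster setting. A minimal sketch of raising it with Python's requests library, assuming a cluster at localhost:9200 (2000 is just an example value):

```python
import requests

ES = "http://localhost:9200"  # assumed cluster address

# Raise the soft limit on how many shards a single search request
# may hit; it defaults to 1000 in early 5.x releases.
resp = requests.put(
    ES + "/_cluster/settings",
    json={"transient": {"action.search.shard_count.limit": 2000}},
)
resp.raise_for_status()
print(resp.json())
```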

Good, that makes sense. Thanks for the reply.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.