Elasticsearch 2.4.6 Performance Optimizations

Hi everyone!

We have 3 nodes with 64 cores, 128 GB of RAM, and 3 TB of SSD storage.

We are receiving more than 1,000 log messages per second, and we have 1,494 open indices with 5 shards per index.
Logstash and the Marvel agent sometimes report problems with bulk requests.
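
In case it's useful, this is roughly how the current shard count and bulk back-pressure can be checked (a minimal sketch in Python using `requests`; the host is a placeholder and the cat column names are the 2.x ones):

```python
# Minimal sketch: pull the total shard count and per-node bulk thread-pool
# stats from one node's REST API. ES_HOST is a placeholder; the bulk.*
# column names below follow the 2.x _cat/thread_pool format.
import requests

ES_HOST = "http://localhost:9200"

# Total active shards in the cluster (1,494 indices * 5 shards, plus replicas, adds up fast).
health = requests.get(ES_HOST + "/_cluster/health").json()
print("active shards:", health["active_shards"])

# Queued/rejected bulk requests per node point at the back-pressure
# that Logstash and Marvel are reporting.
resp = requests.get(
    ES_HOST + "/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected"
)
print(resp.text)
```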

How can we optimize our stack?

Thanks.

That sounds like an awful lot of shards for that cluster size and data volume. Read this blog post about shards and sharding, as it provides some practical guidelines.
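
One way to act on that in 2.x, where the Shrink API is not available, is to lower the shard count for newly created indices with an index template. A minimal sketch (the `logstash-*` pattern, the template name, and the shard/replica counts are assumptions to adjust to your setup):

```python
# Sketch: cap new time-based indices at 1 primary shard via an index template.
# The index pattern, template name, and counts are assumptions; ES 2.x
# templates use the "template" key (renamed to index_patterns in later versions).
import requests

ES_HOST = "http://localhost:9200"  # placeholder for one of the nodes

template = {
    "template": "logstash-*",
    "settings": {
        "number_of_shards": 1,   # ~1k msgs/s rarely needs more than one primary per index
        "number_of_replicas": 1,
    },
}
resp = requests.put(ES_HOST + "/_template/logstash_one_shard", json=template)
print(resp.json())
```

Existing indices keep their current shard count, so consolidating older daily indices (for example by reindexing several days into one index) is the other half of the cleanup.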