We have 3 nodes with 64 cores, 128 GB of RAM, and 3 TB of SSD each.
We are receiving more than 1k log messages per second, and we have 1,494 open indices with 5 shards per index.
Logstash and the Marvel agent sometimes report bulk indexing problems.
That sounds like an awful lot of shards for that cluster size and data volume: 1,494 indices at 5 shards each is roughly 7,470 primary shards, about 2,500 per node before replicas, for an ingest rate that a handful of shards could absorb. Each shard is a full Lucene index with its own memory and file-handle overhead, which is a likely cause of the bulk problems you are seeing. Read this blog post about shards and sharding, as it provides some practical guidelines.
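One way to cut the shard count going forward, assuming your indices are time-based Logstash indices named `logstash-*`, is an index template that lowers the default shard count for newly created indices (a sketch using the legacy `_template` API that was current in the Marvel era; existing indices are unaffected and would need to be reindexed or dropped as they age out):

```
PUT _template/logstash_shards
{
  "template": "logstash-*",
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

You can check the current shard distribution with `GET _cat/shards?v` and the per-node totals with `GET _cat/allocation?v` before and after the change takes effect.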