Hi, I have a 5-node cluster in a lab environment: 2 data nodes, 3 master-eligible nodes, plus 3 Kibana and 2 Logstash instances. It's a low-volume setup with about half a dozen systems feeding data into it. It uses the default of 5 primary shards and 1 replica per index, but my indices are only about 2-8 MB each, so after roughly 36 days I have a ton of shards. How can I reduce this to 1 primary shard and 1 replica?
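Shard counts for *new* indices can be set with an index template. A minimal sketch (the template name and the `logstash-*` pattern are assumptions here; match the pattern to your actual index names, and note that pre-6.0 Elasticsearch uses the key `template` instead of `index_patterns`):

```json
PUT _template/one_shard_default
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

This only affects indices created after the template is in place. The primary shard count of an *existing* index cannot be changed in place; you would need to reindex into a new 1-shard index, or (on 5.x+) use the `_shrink` API on a read-only index.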
Thanks for the suggestion.
I have an OSSEC/Wazuh manager server with Filebeat sending alerts to Logstash. I'm not sure where to add the line defining the number of primary and replica shards. This is the template from the Wazuh group.
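In an index template JSON document, the shard counts go in a top-level `settings` object, alongside the template's existing `template`/`index_patterns` and `mappings` keys. A sketch of just that block (the `wazuh-alerts-*` pattern is an assumption based on Wazuh's usual naming; keep the rest of the Wazuh template, including its `mappings`, unchanged):

```json
{
  "order": 0,
  "template": "wazuh-alerts-*",
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 1
  }
}
```

If the Wazuh template already has a `settings` section, add or edit the two `index.number_of_*` keys inside it rather than adding a second `settings` object, then re-upload the template so newly created daily indices pick it up.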