Right now I have an ELK configuration where Filebeat sends logs to Logstash, and Logstash applies a filter and sends the information to Elasticsearch. This configuration is generating too many shards on each index, and some of these shards are unassigned. I was reading that one solution is to reduce the number of shards, but I can't find how to reduce the shard count for the current indices or how to apply this to new indices. The idea is to decrease the number of shards from 5 to 3.
I was wondering if any of you have had the same issue and managed to solve it.
I was reading that one solution is to reduce the number of shards in the Logstash index. Does anyone know how to edit the Logstash index template to force it to use fewer shards?
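For reference, the kind of template change I have in mind is roughly this (just a sketch; the template name and order are placeholders I made up, and I haven't tried it):

PUT _template/logstash-shards
{
  "index_patterns": ["logstash-*"],
  "order": 1,
  "settings": {
    "index.number_of_shards": 3
  }
}

The idea would be that a higher-order template only overrides the shard setting for new indices while the default logstash template keeps its mappings, but I'm not sure this is the right approach.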
Before making any changes I updated my Elastic Stack to version 7.0.1. Now the Elasticsearch index for Logstash has only 1 shard, but I still get the same error, "1 of 7 shards failed". Checking the Elasticsearch logs, I found this error related to the Logstash index:
"Fielddata is disabled on text fields by default. Set fielddata=true on [host.hostname] in order to load fielddata in memory by uninverting the inverted index."
One solution could be to modify the index, but since I create a new index every day, I would have to modify it every day. Another solution could be to modify the Logstash template for Elasticsearch, but the truth is I don't know how to handle this. Please help if anyone knows how to deal with this error.
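Just to be clear about what I mean by modifying the index, it would be something along these lines (the index name is only an example, and I haven't verified this):

PUT logstash-2019.05.23/_mapping
{
  "properties": {
    "host": {
      "properties": {
        "hostname": {
          "type": "text",
          "fielddata": true
        }
      }
    }
  }
}

but doing that by hand on every daily index is not practical, which is why I'd prefer a fix at the template level.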
I had exactly the same error when I upgraded: 1 of 5 shards failed.
Eventually it cleared up by itself; I don't know what caused it.
Basically, in 7.x the default is one shard per index, whereas in previous versions it was 5 shards.
The reasoning seems valid, but I am not satisfied with it, though I am not an expert.
Post your Logstash config file here and we should be able to figure out how to reduce the index/shard count.
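In the meantime you can check the per-index shard count directly, for example with something like this (index pattern is just an example):

GET logstash-*/_settings/index.number_of_shards

to confirm whether your indices were created with 1 or 5 shards.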
Thanks for your response. I think I found the solution (a lucky break): I changed the way Logstash sends the information to Elasticsearch by adding the Beat version to the index name:
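The output section now looks roughly like this (hosts and other options are simplified placeholders, not my exact config):

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}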
With this change Elasticsearch uses the Filebeat index template instead of the Logstash index template to save the data, so Elasticsearch stores the data in a different way, and using this index fixed the problem.
The root cause was the ignore_above parameter of the Logstash index template: when I checked the Logstash index mapping, I found that this parameter for host.hostname was set to 254, versus the Filebeat index where the value was 1024.
Filebeat index:
GET filebeat-7.1.0-2019.05.23/_mapping/field/host.hostname
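Comparing that with the equivalent check against the Logstash index (index name here is just an example):

GET logstash-2019.05.23/_mapping/field/host.hostname

is where the ignore_above difference shows up.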