Reduce the number of shards per index (Elasticsearch)

Hi All

Right now I have an ELK configuration where Filebeat sends logs to Logstash, Logstash applies a filter and sends the information to Elasticsearch. This configuration is generating too many shards on each index and some of these shards are unassigned. I read that one solution is to reduce the number of shards, but I can't find how to reduce the number of shards for the current index or how to apply this to the new indexes. The idea is to decrease the number of shards from 5 to 3.

I was wondering if maybe one of you has had the same issue and managed to solve it.

Thanks in advance for your help.

Regards.

Hi

I was reading and one solution is to reduce the number of shards in the Logstash index. Do you know how to edit the Logstash index template to force it to use fewer shards?

Thanks.

Regards.

In Elasticsearch you can edit a template using the template API.

So I'd read the existing template and then update it.

The updated template will be picked up tomorrow when a new index is created, and your new settings will be applied.
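
Something along these lines (just a sketch; I'm assuming the template is called "logstash" and matches "logstash-*", and that you are on a version that still uses the legacy _template endpoint):

# read the current template first; PUT replaces the whole template,
# so copy its existing mappings etc. back into the body below
GET _template/logstash

# write it back with the shard count you want for new indices
PUT _template/logstash
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "index.number_of_shards": 3
  }
}

Indices that already exist keep their current shard count; only indices created after the change will get 3 primary shards.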

Hi All

Before making any change I updated my Elastic Stack to version 7.0.1. Now the Elasticsearch index for Logstash has only 1 shard, but I still get the same error, "1 of 7 shards failed". Checking the Elasticsearch logs I found this error related to the Logstash index:

"Fielddata is disabled on text fields by default. Set fielddata=true on [host.hostname] in order to load fielddata in memory by uninverting the inverted index."

One solution could be to modify the index, but I create a new index every day, so I would have to modify it every day. Another solution could be to modify the Logstash template for Elasticsearch, but the truth is that I don't know how to handle this. Please help if anyone knows how to manage this error.
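
For reference, one thing I was thinking about (I'm not sure it's the right approach) is to check how that field is actually mapped on the daily index, and which template the index picked up:

# check how host.hostname is mapped on the daily indices
GET filebeat-7-*/_mapping/field/host.hostname

# list the installed templates to see which one matches the index name
GET _template

If it turns out to be a text field there, I guess the fix would go into whichever template the index is using, but that is the part I don't know how to change.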

Thanks.

Regards.

I'm moving your question to #logstash as I believe you'll find better help there.

I had the exact same error when I upgraded, "1 of 5 shards failed".
Eventually it cleared up by itself; I don't know what it was.

Basically, on 7.x the default is one shard per index, whereas in previous versions it was 5 shards.
The reasoning seems valid, but I am not satisfied, though I am not an expert.

Post your Logstash config file here and we should be able to find out how to reduce the index/shard count.

Hi

Thanks for your answer, this is my Logstash configuration:

input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp}"]
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][beat]}-7-%{+YYYY.MM.dd}"
  }
}

Luise, post some index names that appear in your system.
I am trying to see what %{[@metadata][beat]} is bringing in.

The way I have it configured is:
myindex-%{+YYYY.MM}

and then the index template pattern is myindex-*.

Now I have myindex-2019.04 and myindex-2019.05, and when I delete one index I have no problem with my visualizations.
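
For example, mine is roughly like this (just a sketch of my own setup; "myindex" is only an example name):

# legacy template API; every new myindex-YYYY.MM index picks these settings up when it is created
PUT _template/myindex
{
  "index_patterns": ["myindex-*"],
  "settings": {
    "index.number_of_shards": 1
  }
}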

Hi

Thanks for your response. I think I found the solution (a lucky break): I changed the way Logstash sends the information to Elasticsearch by adding the Beat version to the index name:

"%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"

With this change Elasticsearch uses the Filebeat index template instead of the Logstash index template to save the data, so Elasticsearch stores the data in a different way, and using this index fixes the problem.

The root cause was the ignore_above parameter of the Logstash index template: when I checked the Logstash index mapping I found that this parameter for host.hostname was set to 256, vs. the Filebeat index where this value is 1024.

Filebeat index:

GET filebeat-7.1.0-2019.05.23/_mapping/field/host.hostname

{
  "filebeat-7.1.0-2019.05.23" : {
    "mappings" : {
      "host.hostname" : {
        "full_name" : "host.hostname",
        "mapping" : {
          "hostname" : {
            "type" : "keyword",
            "ignore_above" : 1024
          }
        }
      }
    }
  }
}

Logstash index:

GET filebeat-7-2019.05.23/_mapping/field/host.hostname

{
  "filebeat-7-2019.05.23" : {
    "mappings" : {
      "host.hostname" : {
        "full_name" : "host.hostname",
        "mapping" : {
          "hostname" : {
            "type" : "keyword",
            "ignore_above" : 256
          }
        }
      }
    }
  }
}

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.