Hi,
I have done an upgrade of my ELK stack (Elasticsearch, Kibana, Logstash, Filebeat). Everything was fine yesterday. Now I am getting these error messages in Logstash:
[2018-02-21T18:29:46,901][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.02.21", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x66c76646>], :response=>{"index"=>{"_index"=>"logstash-2018.02.21", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [default]: [include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.", "caused_by"=>{"type"=>"mapper_parsing_exception", "reason"=>"[include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field."}}}}}
In my Logstash template I can see that the _all field is set to true.
How can I fix this problem?
How can I update this? This is a template and not a regular index, right?
I am a bit lost here, since I am the one who set this system up. I am also very new to Elasticsearch.
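If I understand the error correctly, the old 5.x Logstash template still contains _all / include_in_all, so the new daily index cannot be created. A rough, untested sketch of what I am thinking of doing (the template name "logstash" and the host localhost:9200 are assumptions on my side):

# Untested sketch: inspect the stale "logstash" template and delete it so that
# Logstash 6.x (with manage_template enabled) can install an updated template
# that no longer uses _all / include_in_all. Host and template name are assumptions.
import json
import requests

ES = "http://localhost:9200"

# Confirm the old template still references _all / include_in_all
tmpl = requests.get(ES + "/_template/logstash").json()
print(json.dumps(tmpl, indent=2))

# Delete the stale template; restarting Logstash should recreate a 6.x-compatible one.
# (Setting template_overwrite => true in the elasticsearch output is an alternative.)
requests.delete(ES + "/_template/logstash").raise_for_status()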
Here is the output of GET _cluster/allocation/explain for one of the unassigned shards:
{
  "index": "logstash-2017.09.23",
  "shard": 2,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2018-02-22T09:13:32.695Z",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions": [
    {
      "node_id": "--eGugLCRjCZbQO-Fq0ICQ",
      "node_name": "shogun",
      "transport_address": "127.0.0.1:9300",
      "node_decision": "no",
      "deciders": [
        {
          "decider": "enable",
          "decision": "NO",
          "explanation": "no allocations are allowed due to cluster setting [cluster.routing.allocation.enable=none]"
        },
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "the shard cannot be allocated to the same node on which a copy of the shard already exists [[logstash-2017.09.23][2], node[--eGugLCRjCZbQO-Fq0ICQ], [P], s[STARTED], a[id=bxlffJrSTHKX9fXwg6ODXg]]"
        }
      ]
    }
  ]
}
This Elasticsearch instance stores our production server logs and needs to be kept.
How can I minimize the number of shards? I had another issue where you answered my question there too.
I wanted to reduce the number of shards after I solve this problem here.
Currently our logs are not being stored in Elasticsearch anymore.
Worst case, you can remove the logstash-2017.09.23 index altogether.
This Elasticsearch instance stores our production server logs and needs to be kept.
That does not sound reasonable on a single node with only 4 GB of heap, IMHO.
How can I minimize the number of shards?
The Shrink API might help. But in general I'd suggest looking at the standard resources about index and shard sizing.
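For reference, a minimal sketch of what shrinking one daily index could look like (untested; the source index name comes from the output above, while the target name, host, and shard counts are just examples):

# Untested sketch of the Shrink API on a single daily index.
# The source index must be read-only and all of its shards on one node first.
import requests

ES = "http://localhost:9200"
SRC = "logstash-2017.09.23"
DST = "logstash-2017.09.23-shrunk"

# 1) Block writes on the source index (on a multi-node cluster you would also
#    pin all of its shards to a single node at this point).
requests.put(ES + "/" + SRC + "/_settings",
             json={"settings": {"index.blocks.write": True}}).raise_for_status()

# 2) Shrink down to one primary shard; the target shard count must be a
#    factor of the source shard count, and 1 always is.
requests.post(ES + "/" + SRC + "/_shrink/" + DST,
              json={"settings": {"index.number_of_shards": 1,
                                 "index.number_of_replicas": 0}}).raise_for_status()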
I wanted to reduce the number of shards after I solve this problem here.
I'd maybe reduce the number of replicas to 0 (which will not change anything) and then add a new server to share the load across multiple machines.
You can also think about closing old indices. They will consume fewer resources, which might help.
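A minimal sketch of those two suggestions (untested; the host and the index patterns are assumptions, adjust them to your naming scheme):

# Untested sketch: drop replicas and close old indices.
import requests

ES = "http://localhost:9200"

# On a single node the replica copies can never be assigned anyway, so setting
# number_of_replicas to 0 only clears the unassigned replica shards.
requests.put(ES + "/logstash-*/_settings",
             json={"index": {"number_of_replicas": 0}}).raise_for_status()

# Close indices that are no longer searched; they keep their data on disk
# but stop consuming heap. Example pattern: everything from 2017.
requests.post(ES + "/logstash-2017.*/_close").raise_for_status()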
Hi David, setting cluster.routing.allocation.enable: all in elasticsearch.yml was the solution.
Our ES instance runs on a KVM-based hypervisor with enough storage, so assigning more disk space is easy to do.
Thanks for the articles. They will be helpful for optimizing our infrastructure.
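For anyone finding this thread later: the same cluster.routing.allocation.enable setting can also be changed at runtime through the cluster settings API, without editing elasticsearch.yml or restarting the node. A minimal sketch (host assumed to be localhost:9200):

# Runtime equivalent of cluster.routing.allocation.enable: all
import requests

requests.put("http://localhost:9200/_cluster/settings",
             json={"persistent": {"cluster.routing.allocation.enable": "all"}}
             ).raise_for_status()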