_all field error - Could not index event to Elasticsearch

Hi,
I have upgraded my Elastic Stack (Elasticsearch, Kibana, Logstash, Filebeat). Everything was fine yesterday, but now I am getting these error messages in Logstash:

[2018-02-21T18:29:46,901][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.02.21", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x66c76646>], :response=>{"index"=>{"_index"=>"logstash-2018.02.21", "_type"=>"doc", "_id"=>nil, "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"Failed to parse mapping [_default_]: [include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field.", "caused_by"=>{"type"=>"mapper_parsing_exception", "reason"=>"[include_in_all] is not allowed for indices created on or after version 6.0.0 as [_all] is deprecated. As a replacement, you can use an [copy_to] on mapping fields to create your own catch all field."}}}}}

In my Logstash template I can see that the _all field is set to true.
How can I fix this problem?

Thank you very much!

Regards,
Ahmet

You need to adjust your template and remove the _all field from it altogether.
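If you still want a catch-all field afterwards, the error message itself points at copy_to. A minimal sketch of the idea, not your actual template (the template and catch_all field names are made up here):

PUT /_template/catch_all_example
{
  "index_patterns": ["logstash-*"],
  "mappings": {
    "_default_": {
      "properties": {
        "message": {
          "type": "text",
          "copy_to": "catch_all"
        },
        "catch_all": {
          "type": "text"
        }
      }
    }
  }
}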


Hi David,

how can I do this? Do I just need to update my template mapping, or do I need to create a new template?
Is there an easy way of doing it?

Thank you so much.

Regards,
Ahmet

Run an update template request and update your template.


I ran GET /_template/logstash in Kibana and got this:

{
  "logstash": {
    "order": 0,
    "version": 50001,
    "index_patterns": [
      "logstash-*"
    ],
    "settings": {
      "index": {
        "refresh_interval": "5s"
      }
    },
    "mappings": {
      "_default_": {
        "_all": {
          "enabled": true,
          "norms": false
        },
        "dynamic_templates": [
          {
            "message_field": {
              "path_match": "message",
              "match_mapping_type": "string",
              "mapping": {
                "type": "text",
                "norms": false
              }
            }
          },
          {
            "string_fields": {
              "match": "*",
              "match_mapping_type": "string",
              "mapping": {
                "type": "text",
                "norms": false,
                "fields": {
                  "keyword": {
                    "type": "keyword"
                  }
                }
              }
            }
          }
        ],
        "properties": {
          "@timestamp": {
            "type": "date",
            "include_in_all": false
          },
          "@version": {
            "type": "keyword",
            "include_in_all": false
          },
          "geoip": {
            "dynamic": true,
            "properties": {
              "ip": {
                "type": "ip"
              },
              "location": {
                "type": "geo_point"
              },
              "latitude": {
                "type": "half_float"
              },
              "longitude": {
                "type": "half_float"
              }
            }
          }
        }
      }
    },
    "aliases": {}
  }
}

How can I update this? This is a template and not a regular index, right?
I am a bit lost here; I am the one who set this system up, but I am also very new to Elasticsearch :slight_smile:

Thanks for the help :+1:

Yes, you can update it with the PUT template API.
I think that if you delete it and restart Logstash, it will be created by Logstash again.
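If you go the delete-and-restart route, that would be something like this (assuming the template is still named logstash and your Logstash Elasticsearch output still has manage_template enabled, which is the default):

DELETE /_template/logstash

Then restart Logstash and it should push a 6.x-compatible template again.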

Hi David,

I've tried this:

PUT /_template/logstash
{
  "order": 0,
  "version": 50001,
  "index_patterns": [
    "logstash-*"
  ],
  "settings": {
    "index": {
      "refresh_interval": "5s"
    }
  },
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "message_field": {
            "path_match": "message",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false
            }
          }
        },
        {
          "string_fields": {
            "match": "*",
            "match_mapping_type": "string",
            "mapping": {
              "type": "text",
              "norms": false,
              "fields": {
                "keyword": {
                  "type": "keyword"
                }
              }
            }
          }
        }
      ],
      "properties": {
        "@timestamp": {
          "type": "date"
        },
        "@version": {
          "type": "keyword"
        },
        "geoip": {
          "dynamic": true,
          "properties": {
            "ip": {
              "type": "ip"
            },
            "location": {
              "type": "geo_point"
            },
            "latitude": {
              "type": "half_float"
            },
            "longitude": {
              "type": "half_float"
            }
          }
        }
      }
    }
  },
  "aliases": {}
}

Now I am getting this:

[2018-02-22T10:03:42,661][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"unavailable_shards_exception", "reason"=>"[logstash-2018.02.22][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[logstash-2018.02.22][0]] containing [17] requests]"})

GET _cat/indices/logstash-2018.02.22
returns:
red open logstash-2018.02.22 4Z7sktgvRJKUEaZMDGeC5Q 5 1

Can you start from scratch or is it a production server?

Also please format your code with </> and not the citation icon.

Unfortunately, it is a production server :-S

Can you run:

GET /_cat/nodes?v
GET /_cat/indices?v
GET /_cat/health?v

Of course:

GET /_cat/nodes?v

ip        heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
127.0.0.1           55          98  12    0.32    0.20     0.29 mdi       *      shogun

GET /_cat/indices?v
see: https://paste.ee/p/JkRM4

GET /_cat/health?v

epoch      timestamp cluster  status node.total node.data shards  pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1519292668 10:44:28  logstash red             1         1   1752 1752    0    0     1751             0                  -                 50.0%

You have too many shards here for a single node.

Can you also run:

GET /_cluster/allocation/explain

Here it is:

{
  "index": "logstash-2017.09.23",
  "shard": 2,
  "primary": false,
  "current_state": "unassigned",
  "unassigned_info": {
    "reason": "CLUSTER_RECOVERED",
    "at": "2018-02-22T09:13:32.695Z",
    "last_allocation_status": "no_attempt"
  },
  "can_allocate": "no",
  "allocate_explanation": "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions": [
    {
      "node_id": "--eGugLCRjCZbQO-Fq0ICQ",
      "node_name": "shogun",
      "transport_address": "127.0.0.1:9300",
      "node_decision": "no",
      "deciders": [
        {
          "decider": "enable",
          "decision": "NO",
          "explanation": "no allocations are allowed due to cluster setting [cluster.routing.allocation.enable=none]"
        },
        {
          "decider": "same_shard",
          "decision": "NO",
          "explanation": "the shard cannot be allocated to the same node on which a copy of the shard already exists [[logstash-2017.09.23][2], node[--eGugLCRjCZbQO-Fq0ICQ], [P], s[STARTED], a[id=bxlffJrSTHKX9fXwg6ODXg]]"
        }
      ]
    }
  ]
}

Why is cluster.routing.allocation.enable set to none?

Can you also run:

GET /_nodes/stats?human

Also, as I said, you have too many indices and shards here.
Do you need to keep all that historical data around?

I did not set this. I don't know why it has been set to none.

This is the output:
see: https://paste.ee/p/7FDMO

This Elasticsearch instance stores our production server logs and needs to be kept.
How can I minimize shards? I had another issue where you answered my question too :slight_smile:.
I wanted to minimize the shard count after solving this problem here.
Currently our logs are not being stored to Elasticsearch anymore :cry:

Free space is 36.3gb out of 200gb, so less than 20%. Not a problem yet, but I prefer to tell you.
At some point, you will hit Disk-based shard allocation | Elasticsearch Guide [8.11] | Elastic.
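You can keep an eye on the per-node disk usage with:

GET /_cat/allocation?v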

Maybe change the value of cluster.routing.allocation.enable to all. See: Cluster-level shard allocation | Elasticsearch Guide [8.11] | Elastic
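One way to change it is through the cluster settings API; a sketch (use "persistent" instead of "transient" if you want it to survive a full cluster restart):

PUT /_cluster/settings
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}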

Maybe you can "force" allocating the missing shard again with a cluster reroute call: Cluster reroute API | Elasticsearch Guide [8.11] | Elastic

allocate_empty_primary might help.
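A sketch of such a reroute for the red index above (index, shard and node are taken from your earlier output; accept_data_loss is required for allocate_empty_primary because it throws away whatever was in that shard):

POST /_cluster/reroute
{
  "commands": [
    {
      "allocate_empty_primary": {
        "index": "logstash-2018.02.22",
        "shard": 0,
        "node": "shogun",
        "accept_data_loss": true
      }
    }
  ]
}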

Worst case, you can remove the logstash-2017.09.23 index altogether.

This Elasticsearch instance stores our production server logs and needs to be kept.

That does not sound reasonable on a single node with only 4GB of heap, IMHO.

How can I minimize shards?

The Shrink API might help. But in general I'd suggest reading up on shard and index sizing.
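For illustration, shrinking one daily index down to a single shard could look like this (the target index name is made up; the source must be made read-only first, and the target shard count must be a factor of the source's 5 shards):

PUT /logstash-2017.09.23/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}

POST /logstash-2017.09.23/_shrink/logstash-2017.09.23-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.number_of_replicas": 0
  }
}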

I wanted to minimize the shard count after solving this problem here.

I'd maybe reduce the number of replicas to 0 (which will not change anything, since those replicas are unassigned anyway on a single node) and then add a new server to share the load across multiple machines.
You can also think about closing old indices. They will consume fewer resources, which might help.
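A sketch of both suggestions (the wildcard matches all logstash indices; the closed index is just an example of an old one):

PUT /logstash-*/_settings
{
  "index": {
    "number_of_replicas": 0
  }
}

POST /logstash-2017.09.23/_close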


Hi David,
cluster.routing.allocation.enable: all in elasticsearch.yml was the solution.

Our ES instance is on a KVM-based hypervisor with enough storage, so assigning more disk space is easy to do :slight_smile:
Thanks for the articles. They will be helpful for optimizing our infrastructure.

Thank you so much for your help!

Best regards from Germany :handshake:
