Problem transferring logs: Filebeat 6.1.3 > Logstash 6.1.3 > Elasticsearch 6.1.3

Hello!
Transferring logs Filebeat > Elasticsearch without Logstash works fine.
When I add Logstash, I see this in the log:
[2018-06-18T08:42:00,801][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-tmp-pg_postgresql-9.5-main.log-2018.06.18", :_type=>"doc", :_routing=>nil, :pipeline=>"filebeat-6.1.1-postgresql-log-pipeline"}, #<LogStash::Event:0x5c3141ac>], :response=>{"index"=>{"_index"=>"filebeat-tmp-pg_postgresql-9.5-main.log-2018.06.18", "_type"=>"doc", "_id"=>"JbgOEmQBroNl8DFs2kuj", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

The index mapping for the host field is:

  "host": {
    "properties": {
      "name": {
        "type": "keyword",
        "ignore_above": 1024
      },
      "architecture": {
        "type": "keyword",
        "ignore_above": 1024
      },
      "id": {
        "type": "keyword",
        "ignore_above": 1024
      },
      "os": {
        "properties": {
          "family": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "version": {
            "type": "keyword",
            "ignore_above": 1024
          },
          "platform": {
            "type": "keyword",
            "ignore_above": 1024
          }
        }
      }
    }
  }

I don't understand how to resolve this problem :disappointed_relieved:. Please help me!

The error message comes from Elasticsearch; it occurs because something is attempting to index a document whose host value is a concrete value (likely a string), while the index mapping expects an object (from your mapping, it looks like host should have sub-keys like name, architecture, id, and os).
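
For illustration (the hostname here is hypothetical): under your mapping, Elasticsearch would reject a document shaped like the first one below but accept the second:

{ "host": "pg1.example.com" }

{ "host": { "name": "pg1.example.com" } }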

What does your Logstash pipeline configuration look like? If you include a stdout output plugin using the (poorly named) rubydebug codec, what do your events look like?

output {
  elasticsearch {
    # ...
  }
  stdout {
    codec => rubydebug
  }
}
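
For reference, an event printed by the rubydebug codec looks roughly like this (field values here are hypothetical), which makes it easy to see whether host is arriving as a string or an object:

{
    "@timestamp" => 2018-06-18T08:42:00.801Z,
          "host" => "pg1.example.com",
       "message" => "a raw log line"
}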

Maybe I can disable sending the host field?
My logstash config: /etc/logstash/conf.d/filebeat.conf

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["elastic1.server.com:9200", "elastic2.server.com:9200", "elastic3.server.com:9200"]
    index => "filebeat-tmp-pg_postgresql-9.5-main.log-%{+YYYY.MM.dd}"
    pipeline => "filebeat-6.1.1-postgresql-log-pipeline"
  }
}

It looks like your index was created with a newer version of Beats that implements the ECS (Elastic Common Schema), but the Logstash output isn't quite in the same shape :weary:

ECS has a host.name field, which corresponds to the host name being output from Logstash. By adding a mutate filter with a rename directive, you can move the host field to host.name using the field-reference syntax shown below, aligning it with the schema you already have in Elasticsearch:

filter {
  mutate {
    # Move the plain-string host into host.name to match the ECS-style mapping
    rename => {
      "[host]" => "[host][name]"
    }
  }
}
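
With this filter in place, an event that arrived with host => "pg1.example.com" (hypothetical hostname) leaves Logstash with host => { "name" => "pg1.example.com" }, which matches the object mapping above.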

Can I update my Logstash to a version that supports the host.name field?

Same problem here.

I'm using Filebeat (6.1.3 & 6.3.0) -> Logstash 6.3.0 -> Elasticsearch 6.3.0.

Filebeat 6.1.3 sends host information as a string, e.g. "host": "ip-10-1-100-16",
while Filebeat 6.3.0 sends it as an object, e.g. "host": { "name": "ip-10-1-100-230" }.

In my case, since I also have the Beats information ("beat": { "hostname": "ip-10-1-100-230" }), I can simply drop the host field via:

filter {
  mutate {
    remove_field => [ "host" ]
  }
}

However, if you want to convert events in the old shape into the new one, you can do something similar to this:

filter {
  if [beat][version] < "6.3" {
    mutate {
      # Older Beats send host as a plain string; move it into the new object shape
      rename => { "[host]" => "[host][name]" }
    }
  }
}
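
With this in place, events from pre-6.3 Beats should reach Elasticsearch in the same host object shape as 6.3.0 events, so both index cleanly into the same mapping.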

I also have this issue; the details follow.

We have upgraded Filebeat and Logstash to 6.3 as part of patching.
#logstash -V
logstash 6.3.0

#filebeat version
filebeat version 6.3.0 (amd64), libbeat 6.3.0

We noticed that the format of host coming through Logstash has changed,
e.g. from "host": "xxx.ood.ops" to "host": { "name": "xxx.ood.ops" }.


After the upgrade we are getting the following errors.
In Logstash:
 Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"3320728454", :_index=>"logstash-2018.07.18", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x4e8bfcd8>], :response=>{"index"=>{"_index"=>"logstash-2018.07.18", "_type"=>"doc", "_id"=>"3320728454", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"object mapping for [host] tried to parse field [host] as object, but found a concrete value"}}}}

In Elasticsearch:
 [2018-07-18T17:01:04,903][DEBUG][o.e.a.b.TransportShardBulkAction] [logstash-2018.07.18][1] failed to execute bulk item (index) BulkShardRequest [[logstash-2018.07.18][1]] containing [11] requests

org.elasticsearch.index.mapper.MapperParsingException: object mapping for [host] tried to parse field [host] as object, but found a concrete value

The JSON document as seen in Kibana:

{
  "_index": "logstash-2018.07.18",
  "_type": "doc",
  "_id": "2114191840",
  "_version": 102181,
  "_score": null,
  "_source": {
    "prospector": {
      "type": "log"
    },
    "host": {
      "name": "xxx.ood.ops"
    },
    "source": "/var/log/sample.out",
    "beat": {
      "name": "xxx.ood.ops",
      "hostname": "xxx.ood.ops",
      "version": "6.3.0"
    },
    "input": {
      "type": "log"
    }
  }
}

I tried the renaming solution, and the following is the error:

[2018-07-18T16:01:10,186][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"4218739081", :_index=>"logstash-2018.07.18", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x14e8812e>], :response=>{"index"=>{"_index"=>"logstash-2018.07.18", "_type"=>"doc", "_id"=>"4218739081", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host.name]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:57"}}}}}

Can you please help?

What error do you see in the Elasticsearch log when you see that error in the Logstash log?

Hi,
the Elasticsearch error is right below the Logstash error in my previous post.

As a workaround, we renamed host.name to hostname for now, since indexing is failing on the [host] field.

mutate {
  rename => { "[host][name]" => "[hostname]" }
}

Let me know if there is any other way.
