Object mapping for [host] tried to parse field [host] as object, but found a concrete value

We have just upgraded Kibana, Logstash, Filebeat and Elasticsearch from 6.3 to 7.1.1, and I have been trying to solve the following issue showing up in /var/log/elasticsearch/elasticsearch.log:

[2019-06-04T09:33:31,761][DEBUG][o.e.a.b.TransportShardBulkAction] [hostname] [filebeat-6.3.0-2019.06.04][0] failed to execute bulk item (index) index {[filebeat-6.3.0-2019.06.04][doc][mNGxImsBFBvUZWvX4i_w], source[{"input":{"type":"log"},"clientIP":"144.123.1234.123","status":"200","@timestamp":"2019-06-04T13:33:25.559Z","timestamp":"04/Jun/2019:09:33:10 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n172.16.4.3 - - [04/Jun/2019:09:33:11 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n144.123.1234.123 - - [04/Jun/2019:09:33:13 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n172.16.4.3 - - [04/Jun/2019:09:33:13 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n144.123.1234.123 - - [04/Jun/2019:09:33:15 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n172.16.4.3 - - [04/Jun/2019:09:33:15 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n144.123.1234.123 - - [04/Jun/2019:09:33:17 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n144.123.1234.123 - - [04/Jun/2019:09:33:17 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n144.123.1234.123 - - [04/Jun/2019:09:33:19 -0400] \"GET /lb/healthcheck.html HTTP/1.0\" 200 2\n162.209.84.28 - - [04/Jun/2019:09:33:19 -0400","host":"Michael1","request":"/","prospector":{"type":"log"},"source":"/usr/share/apache-tomcat-8.0.28/logs/localhost_access_log..2019-06-04.txt","method":"GET","@version":"1","ts":"2019-06-04T13:33:25.559Z","bytesSent":135,"offset":2834670}]}
org.elasticsearch.index.mapper.MapperParsingException: object mapping for [host] tried to parse field [host] as object, but found a concrete value
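
As far as I can tell, the exception means the index mapping defines [host] as an object, while the event above carries host as a plain string ("host":"Michael1" in the source). A minimal sketch that reproduces the same conflict against a throwaway index (the index name here is made up):

# Map [host] as an object in a scratch index
curl -XPUT "localhost:9200/host-conflict-demo" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "host": { "properties": { "name": { "type": "keyword" } } }
    }
  }
}'

# host as an object matches the mapping and indexes fine
curl -XPOST "localhost:9200/host-conflict-demo/_doc" -H 'Content-Type: application/json' -d'
{ "host": { "name": "Michael1" } }'

# host as a concrete string fails with the same MapperParsingException
curl -XPOST "localhost:9200/host-conflict-demo/_doc" -H 'Content-Type: application/json' -d'
{ "host": "Michael1" }'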

Here is my /etc/logstash/conf.d/10-syslog-filter.conf:

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

Here is my /etc/logstash/conf.d/20-filter-tomcat-logs.conf:

filter {
    # access.log
    if ([source] =~ /.*\.txt$/) {
        grok {
            # Access log pattern is %a %{waffle.servlet.NegotiateSecurityFilter.PRINCIPAL}s %t %m %U%q %s %B %T "%{Referer}i" "%{User-Agent}i"
            # 10.0.0.7 - - [03/Sep/2017:10:58:19 +0000] "GET /pki/scep/pkiclient.exe?operation=GetCACaps&message= HTTP/1.1" 200 39
            match => [ "message" , "%{IPV4:clientIP} - %{NOTSPACE:user} \[%{DATA:timestamp}\] \"%{WORD:method} %{NOTSPACE:request} HTTP/1.1\" %{NUMBER:status} %{NUMBER:bytesSent}" ]
            remove_field => [ "message" ]
            add_field => { "[@metadata][cassandra_table]" => "tomcat_access" }
        }
        grok{
            match => [ "request", "/%{USERNAME:app}/" ]
            tag_on_failure => [ ]
        }
        date {
            match => [ "timestamp", "dd/MMM/YYYY:HH:mm:ss Z" ]
            remove_field => [ "timestamp" ]
        }
        ruby {
            code => "event.set('ts', event.get('@timestamp'))"
        }
        mutate {
            lowercase => [ "user" ]
            convert => [ "bytesSent", "integer", "duration", "float" ]
            update =>  { "host" => "%{[beat][hostname]}" }
            remove_field => [ "beat","type","geoip","input_type","tags" ]
        }
        if [user] == "-" {
            mutate {
                remove_field => [ "user" ]
            }
        }
        # drop unmatching message (like IPv6 requests)
        if [message] =~ /(.+)/  {
            drop { }
        }
    }
}

Here is my /etc/logstash/conf.d/30-elasticsearch-output.conf:

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    #index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}

Has anyone run into this issue and resolved it? Please let me know if you have any questions.

Not sure if this was the solution, but once I updated my /etc/logstash/conf.d/20-filter-tomcat-logs.conf:

from
           update => { "host" => "%{[beat][hostname]}" }

to
           rename => { "[host][name]" => "[hostname]" }

Elasticsearch stopped spitting out the DEBUG messages.
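
For reference, the relevant mutate block after the change looks roughly like this (a sketch; everything except the rename line is unchanged from the filter above):

mutate {
    lowercase => [ "user" ]
    convert => [ "bytesSent", "integer", "duration", "float" ]
    # Beats 7 ships host as an object ({ "name": ... }), so pull the
    # hostname out into its own field instead of overwriting the whole
    # object with a string, which is what the old update => line did
    rename => { "[host][name]" => "[hostname]" }
    remove_field => [ "beat","type","geoip","input_type","tags" ]
}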

With ECS 1.0, I believe host is an object in the spec, so you might have to modify your fields to conform as well. Rather than moving [host][name] -> hostname, keep using [host][name].

Hi @Chris_Denneen, I'm not exactly sure what you mean. Would you mind explaining this, please?

Host in the ECS spec is an object

so now it's:

host:
  name: my hostname
  ip: 10.1.10.10

Where "host" encapsulates "host" related data.

So the correct way going forward is to use the host field as an object instead of a string?
