Logstash 6.5.2 "Could not index event to Elasticsearch", status 400, reason: "failed to parse field [host] of type [text]"

Hello,

After updating the ELK Stack from 6.3.0 to 6.5.2, some of my pipelines stopped working correctly. I have no idea what may be wrong with this one, so some help would be great.

pipeline:

# Defines inputs for logs
input {
    beats {
        port => 5102
    }
}

filter {
    grok {
        # telegraf writes timestamps in nanoseconds (19 digits) but the date
        # filter's UNIX_MS format expects milliseconds (13 digits), so the
        # pattern below captures \d{13} rather than \d{19}
        match => { "message" => "(?<date>(timestamp\":)\d{13})"}
    }
    mutate {
        gsub => [ "date", "timestamp\":", "" ]
    }
    date {
        match => [ "date","UNIX_MS" ]
        target => "@timestamp"
    }
    json {
        source => "message"
    }
    if "counter" in [tags] {
        mutate {
            add_field => {
                "subname" => "%{[tags][counter]}"
            }
        }
    }
    ruby {
        code => 'event.set("tagsAsInfo", event.get("tags").to_h)'
    }
}

output {
    elasticsearch {
        hosts => "elasticsearch:9200"
        user => "user"
        password => "password"
        index => "client1-logstash-mssql-%{+YYYY.MM.dd}"
        id => "client1-mssql"
    }
}
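
Incidentally, since the json filter parses the whole message anyway, the same trimming could be done arithmetically instead of with grok and gsub. A minimal sketch, assuming the json filter is moved ahead of the date logic so the 19-digit timestamp field is available as a number:

filter {
    json {
        source => "message"
    }
    ruby {
        # telegraf's "timestamp" is nanoseconds since the epoch (19 digits);
        # integer division by 1,000,000 yields the milliseconds UNIX_MS expects
        code => 'event.set("date", (event.get("timestamp").to_i / 1_000_000).to_s)'
    }
    date {
        match => [ "date", "UNIX_MS" ]
        target => "@timestamp"
    }
}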

Logstash errors:

[2018-12-07T10:24:35,188][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"client1-logstash-mssql-2018.07.30", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x72fd9ca1>], :response=>{"index"=>{"_index"=>"client1-logstash-mssql-2018.07.30", "_type"=>"doc", "_id"=>"XIgyiGcBUR7CLeXYanpx", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:792"}}}}}
[2018-12-07T10:24:35,188][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"client1-logstash-mssql-2018.07.30", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x3ff1da9f>], :response=>{"index"=>{"_index"=>"client1-logstash-mssql-2018.07.30", "_type"=>"doc", "_id"=>"XYgyiGcBUR7CLeXYanpx", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse field [host] of type [text]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:852"}}}}}

Data from the file being parsed:

{"fields":{"size_kb":4568},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"Bound Trees","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":58384},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"Buffer Pool","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":32176},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"CLR","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":6696},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"DB Metadata (User Store)","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":6328},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"General","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":1760},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"Lock Manager (Object Store)","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":2376},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"Log Pool","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":4376},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"Object Plans","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":2216},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"Schema Manager (User Store)","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":1464},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"SNI Packet (Object Store)","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":29720},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"SOS Node","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":272528},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"SQL Plans","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":2992},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"SQL Reservations","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":6640},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"SQL Storage Engine","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"size_kb":1704},"name":"sqlserver_memory_clerks","tags":{"clerk_type":"System Rowset Store","host":"ID-TEST-SERVER1","sql_instance":"ID-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
...
{"fields":{"value":0},"name":"sqlserver_performance","tags":{"counter":"Bytes Received from Replica/sec","host":"ID1-TEST-SERVER1","instance":"Total","object":"MSSQL$ID1:Availability Replica","sql_instance":"ID1-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"value":0},"name":"sqlserver_performance","tags":{"counter":"Bytes Sent to Replica/sec","host":"ID1-TEST-SERVER1","instance":"Total","object":"MSSQL$ID1:Availability Replica","sql_instance":"ID1-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"value":0},"name":"sqlserver_performance","tags":{"counter":"Bytes Sent to Transport/sec","host":"ID1-TEST-SERVER1","instance":"Total","object":"MSSQL$ID1:Availability Replica","sql_instance":"ID1-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"value":0},"name":"sqlserver_performance","tags":{"counter":"Flow Control Time (ms/sec)","host":"ID1-TEST-SERVER1","instance":"Total","object":"MSSQL$ID1:Availability Replica","sql_instance":"ID1-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}
{"fields":{"value":0},"name":"sqlserver_performance","tags":{"counter":"Flow Control/sec","host":"ID1-TEST-SERVER1","instance":"Total","object":"MSSQL$ID1:Availability Replica","sql_instance":"ID1-TEST-SERVER1:ID1"},"timestamp":1532953570000000000}

I've been having the same issue since I updated from 6.4.0 a few minutes ago.

Hi,

Refer to the breaking changes doc here:
https://www.elastic.co/guide/en/beats/libbeat/current/breaking-changes-6.3.html

Most likely it's the same issue: since 6.3, Beats sends host as an object (with a host.name subfield), so the documents coming into Logstash carry a host object that no longer fits an index mapping where host is text.
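
Concretely, the shape change looks like this (values illustrative):

"host": "ID-TEST-SERVER1"                 (Beats before 6.3: a plain string)
"host": { "name": "ID-TEST-SERVER1" }     (Beats 6.3 and later: an object)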

//Raywon


Thanks,

Below I post the mutate filter that fixed the issue for me, at least for new data; old data still throws the host field exception.

Assuming your logs contain a "host" field like mine:

mutate {
    rename => ["host", "server"]
    # this may not be necessary, but added just in case
    convert => {"server" => "string"}
}
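
If you would rather keep the host field name, an alternative is to flatten the object back to a string instead of renaming it. A sketch, assuming the Beats-style [host][name] subfield is present in the incoming events:

if [host][name] {
    mutate {
        # overwrite the host object with just its name, restoring the old string shape
        replace => { "host" => "%{[host][name]}" }
    }
}

That way old and new documents index host the same way, so existing indices with host mapped as text keep accepting events.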

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.