Issues parsing events from multiple Winlogbeat agents

Hello,
I have attempted to add another Winlogbeat agent on another box, and this is the output in the Logstash logs:

[2018-08-15T14:01:23,485][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.07.27", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x83d4540>], :response=>{"index"=>{"_index"=>"logstash-2018.07.27", "_type"=>"doc", "_id"=>"hp6tPWUBypXk6ixiKfIg", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:674"}}}}}
[2018-08-15T14:04:05,235][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"process_cluster_event_timeout_exception", "reason"=>"failed to process cluster event (put-mapping) within 30s"})

These are my Logstash config files:

apache config:

input {
  file {
    path => "/var/log/logstash/*_log"
  }
}

filter {
  if [path] =~ "access" {
    mutate { replace => { type => "apache_access" } }
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    date {
      match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
    }
  } else if [path] =~ "error" {
    mutate { replace => { type => "apache_error" } }
  } else {
    mutate { replace => { type => "random_logs" } }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

beats config:

input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
}
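From what I've read, the `mapper_parsing_exception` on `[host]` can happen when a newer Beats version sends `host` as an object (with subfields like `host.name`) into an index whose existing mapping expects `host` to be a plain string. One workaround I've seen suggested (untested on my setup, and the `[host][name]` subfield is an assumption about what the agent sends) would be to flatten the object back to a string in the Beats pipeline's filter section:

```
filter {
  # If Beats sent host as an object, replace it with the plain
  # hostname string so it matches the existing index mapping.
  # (Assumes the object carries the hostname under [host][name].)
  if [host][name] {
    mutate {
      replace => { "host" => "%{[host][name]}" }
    }
  }
}
```

I haven't applied this yet; just noting it in case it's relevant to the error below.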

logstash config:

input { stdin { } }

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

syslog config:

input {
  tcp {
    port => 5000
    type => syslog
  }
  udp {
    port => 5000
    type => syslog
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch { hosts => ["localhost:9200"] }
  stdout { codec => rubydebug }
}

I received a few logs from the other box, but then nothing more. They started randomly and stopped randomly.

When Logstash logs that error, I would expect Elasticsearch to log a more informative error.

[2018-08-15T14:01:23,485][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"logstash-2018.07.27", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x83d4540>], :response=>{"index"=>{"_index"=>"logstash-2018.07.27", "_type"=>"doc", "_id"=>"hp6tPWUBypXk6ixiKfIg", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [host]", "caused_by"=>{"type"=>"illegal_state_exception", "reason"=>"Can't get text on a START_OBJECT at 1:674"}}}}}
[2018-08-15T14:04:05,235][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"process_cluster_event_timeout_exception", "reason"=>"failed to process cluster event (put-mapping) within 30s"})
[2018-08-15T14:04:05,238][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"process_cluster_event_timeout_exception", "reason"=>"failed to process cluster event (put-mapping) within 30s"})
[2018-08-15T14:04:05,240][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>2}
[2018-08-15T14:04:05,551][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"process_cluster_event_timeout_exception", "reason"=>"failed to process cluster event (put-mapping) within 30s"})
[2018-08-15T14:04:05,552][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 503 ({"type"=>"process_cluster_event_timeout_exception", "reason"=>"failed to process cluster event (put-mapping) within 30s"})
[2018-08-15T14:04:05,553][INFO ][logstash.outputs.elasticsearch] Retrying individual bulk actions that failed or were rejected by the previous bulk request. {:count=>2}
[2018-08-15T14:20:29,576][INFO ][org.logstash.beats.BeatsHandler] [local: 0.0.0.0:5044, remote: IP:62810] Handling exception: Connection reset by peer
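To see whether the index from the error actually has a conflicting mapping for `host`, the field-mapping API can be queried directly (a diagnostic sketch; the index name is taken from the error message above):

```shell
# Show how the [host] field is mapped in the index the error mentions.
curl -s 'http://localhost:9200/logstash-2018.07.27/_mapping/field/host?pretty'
```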

This is the new log. I checked Elasticsearch and found nothing wrong with it.
It seems like logs are going through now, but why did it give that error, and most importantly, how did it fix itself?
