2018-09-20 10:35:36 917 [DEBUG] from org.mongodb.driver.protocol.command in application-akka.actor.default-dispatcher-383 - Sending command {update : BsonString{value='accounts'}} to database dataIntelligence on connection [connectionId{localValue:7, serverValue:249520}] to server
output
{
          "tags" => [
        [0] "_grokparsefailure"
    ],
       "message" => "2018-09-20 10:35:36 917 [DEBUG] from org.mongodb.driver.protocol.command in application-akka.actor.default-dispatcher-383 - Sending command {update : BsonString{value='accounts'}} to database dataIntelligence on connection [connectionId{localValue:7, serverValue:249520}] to server",
          "host" => "node1",
    "@timestamp" => 2018-09-20T12:51:47.217Z,
      "@version" => "1"
}
Okay, but you're not including it in your grok expression. According to the expression, the loglevel comes immediately after the timestamp, but that's obviously not true of this message (there's a milliseconds field in between).
A single grok filter can list multiple expressions (see the description of the match option in the grok filter documentation for details). After the more specific expressions you currently have, list a generic one that extracts only the minimum: the timestamp, the loglevel, and the message itself.
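For example, a generic fallback for a line like "2018-09-20 10:35:36 917 [DEBUG] from ..." could look something like this (a sketch only; the field names and the exact layout of your other log lines are assumptions, and your more specific patterns should come earlier in the list since grok stops at the first match):

```
filter {
  grok {
    match => {
      "message" => [
        # ... your more specific expressions first ...
        # Generic fallback: timestamp, milliseconds, loglevel, rest of the line.
        "%{TIMESTAMP_ISO8601:timestamp} %{INT:millis} \[%{LOGLEVEL:loglevel}\] %{GREEDYDATA:log_message}"
      ]
    }
  }
}
```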
[2018-09-21T09:18:11,572][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>nil, :_index=>"filebeat-2018.09.21", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x2346414e>], :response=>{"index"=>{"_index"=>"filebeat-2018.09.21", "_type"=>"doc", "_id"=>"JpNs-2UB9EmukO5GHv4N", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [timestamp]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"2018-09-21 09:17:14,137\" is malformed at \" 09:17:14,137\""}}}}}
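That 400 is Elasticsearch rejecting the extracted timestamp string because it doesn't match the index's date mapping ("2018-09-21 09:17:14,137" is not a format the mapper accepts). One common fix is to parse the string with a date filter and drop the raw field so it never reaches the mapping. A minimal sketch, assuming the grok-extracted field is called "timestamp" and uses a comma before the milliseconds:

```
filter {
  date {
    # Parse the raw string into @timestamp.
    match => ["timestamp", "yyyy-MM-dd HH:mm:ss,SSS"]
    # Remove the raw string field so it no longer collides with the
    # index's date mapping for [timestamp].
    remove_field => ["timestamp"]
  }
}
```

Alternatively, you could change the index template's mapping for that field to accept this format, but parsing into @timestamp is usually the cleaner option.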