I'm using Logstash 7.2.0, fed logs through Filebeat. I have done a decent amount of work parsing some of the incoming logs, but I keep running into this recurring issue.
Sending a log like this: 10.0.0.248 - - [22/Oct/2019:16:14:22 +0100] 005_D34DB33F "POST /uri/uri/Wibble HTTP/1.1" 200 614994 1240 "-" "Another string here 10h"
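For reference, here is a rough sketch of the kind of grok pattern I am matching it with. The field names other than logDateTime are illustrative, not my exact rule:

filter {
  grok {
    # Captures the bracketed timestamp into logDateTime; the other
    # field names (clientip, requestId, etc.) are placeholders.
    match => {
      "message" => '%{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:logDateTime}\] %{NOTSPACE:requestId} "%{WORD:verb} %{URIPATHPARAM:request} HTTP/%{NUMBER:httpversion}" %{NUMBER:response} %{NUMBER:bytes} %{NUMBER:duration} "%{DATA:referrer}" "%{DATA:agent}"'
    }
  }
}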
@Badger Elasticsearch is logging so many errors from other logs that are failing to parse (unrelated) that it is impossible to locate the logs for this particular rule. I'm inclined to think that this is also failing when being added to Elasticsearch, but I can't quite put my finger on why.
As an update: if I change the name of the %{logDateTime} field to anything else that doesn't already exist in another rule, then the second rule parses just fine!
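So an equivalent workaround, if anyone needs one, would be renaming the captured field after the grok runs instead of editing the pattern itself (the new name here is just a placeholder):

filter {
  mutate {
    # Rename to any field name not already mapped by another rule
    rename => { "logDateTime" => "accessLogDateTime" }
  }
}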
It could be getting 400 errors for a mapping exception. That is, the type of logDateTime has already been set by another rule. Maybe one is a LogStash::Timestamp and the other is a string? Can you grep for logDateTime in the elasticsearch log?
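Something along these lines should show it (adjust the log path and index pattern to match your deployment):

# Search the Elasticsearch log for errors mentioning the field
grep logDateTime /var/log/elasticsearch/*.log

# Or inspect the current mapping for that field directly
curl -s 'localhost:9200/your-index-*/_mapping/field/logDateTime?pretty'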
So I have already been looking at the mappings, and it looks like that is indeed the problem: logDateTime is mapped as type "date", and the value I am trying to add is not a proper date as far as Elasticsearch is concerned.
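For anyone who hits the same thing, the direction I am going to try is converting the string with a date filter before it is indexed. A sketch based on the log line above (the target option writes the parsed timestamp back over the string, so it should satisfy the existing "date" mapping):

filter {
  date {
    # Parse an HTTPDATE-style string, e.g. 22/Oct/2019:16:14:22 +0100
    match  => [ "logDateTime", "dd/MMM/yyyy:HH:mm:ss Z" ]
    target => "logDateTime"
  }
}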