I'm pushing a few IIS logs to Logstash and some of the "message" fields do not get parsed. I've tested the log lines from the "message" field against the pattern in the default.json ingest pipeline and they parsed correctly, but Kibana just shows the whole log line in a single "message" field. Strangely, other lines get parsed correctly and I can see all the fields broken down. Any thoughts on what may be happening?
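For reference, one way to test a line against the installed pipeline is the ingest simulate API. This is just a sketch: the pipeline name below is an assumption based on the Filebeat version, so check `GET _ingest/pipeline` on your cluster for the exact id:

```
POST _ingest/pipeline/filebeat-6.2.4-iis-access-default/_simulate
{
  "docs": [
    { "_source": { "message": "<one of the raw IIS log lines>" } }
  ]
}
```

The simulated docs come back with all the IIS fields extracted, which is why I expected the same in Kibana.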
Are you using the IIS module in Filebeat? I'm asking because all the parsing is done in an ingest pipeline installed on Elasticsearch, and since you are running Logstash in the middle you might not be sending the events to the ingest pipeline.
Hi @pierhugues,
I have enabled the IIS module inside Filebeat, so I'm assuming events are going through that? I haven't been able to find anywhere on the server where Logstash has other patterns enabled.
My filebeat.yml is configured to load the modules, with reload enabled.
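The relevant part of my filebeat.yml looks roughly like this (the standard module-loading block, with the default path):

```yaml
filebeat.config.modules:
  # Glob pattern for the module configuration files
  path: ${path.config}/modules.d/*.yml
  # Reload module configs when they change on disk
  reload.enabled: true
```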
@AlexB Yes, and that's the catch: Filebeat modules use the Elasticsearch ingest node to do the parsing, so when you insert Logstash between the two you have to do one of the following:
Manually install the Filebeat ingest pipelines into Elasticsearch, then use conditionals and the pipeline option in the Logstash elasticsearch output to route the events (see the sketch after this list).
Or just configure Filebeat to send events directly to Elasticsearch; Filebeat will take care of installing the pipelines required for every module you enable.
If you don't do any work on the events inside Logstash, the last item is probably the easiest to do.
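For the first option, the elasticsearch output in your Logstash pipeline would look something like this minimal sketch. It assumes the pipelines were installed with `filebeat setup --pipelines` and that your Filebeat version puts the target pipeline name into `[@metadata][pipeline]`; hosts and index values are placeholders:

```
output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts    => ["localhost:9200"]
      # Route the event through the ingest pipeline the module expects
      pipeline => "%{[@metadata][pipeline]}"
      index    => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}
```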
@pierhugues I'm only using Filebeat, sending events directly to Elasticsearch, with just the IIS module enabled. The lines it's processing come from the same IIS log file, and all the IIS log fields are enabled. Everything is a default install.
Here's an example of a log line that fails to parse (edited to remove my server info). It parses correctly in the Grok test parser, yet in Kibana it stays whole in the "message" field:
There are similar lines that get parsed correctly, though, with the fields broken down. There's no error.message in the table view for this line or for the other lines that failed to parse.
Not sure if this makes a difference, but there are hundreds of similar lines where only the date/time and the last two fields differ. I would have thought that, since every line has a different timestamp, they should all still be parsed correctly.
I've started Filebeat with the IIS module and had the above line in the watched log file.
When I look at the data in Kibana, everything is correctly extracted.
That's the weird part. I have identical lines, some of which are parsed and some that aren't. Here are two screenshots of lines that are literally next to each other in Kibana, along with my Filebeat config. There are no errors in the Filebeat log, just two INFO lines.
Looking at your configuration, I think you have a normal prospector watching the same file as the module?
If I look at the default IIS module configuration, it uses the same path as above.
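For context, the default module config in modules.d/iis.yml looks roughly like this (a sketch; the commented-out var.paths shows the kind of default location the module falls back to, which is an assumption about your layout):

```yaml
- module: iis
  # Access logs
  access:
    enabled: true
    # Falls back to the standard IIS location when unset, e.g.:
    #var.paths: ["C:/inetpub/logs/LogFiles/*/*.log"]
  # Error logs
  error:
    enabled: true
    #var.paths:
```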
The events from the manually defined prospector don't go into the ingest pipeline, so the fields are not extracted. So when you say some events are correct and some are not, they are in fact duplicates that don't go through the same flow inside Filebeat.
Removing the manually defined prospector for the IIS log path from your configuration should fix your problem.
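For illustration, the conflicting section is typically a plain log input/prospector like this (a hypothetical reconstruction; the path is assumed to match what the module already watches):

```yaml
filebeat.inputs:          # filebeat.prospectors in older 6.x versions
- type: log
  enabled: true
  paths:
    # Same files the IIS module already reads, so every line is shipped twice
    - C:/inetpub/logs/LogFiles/*/*.log
```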