IIS Log message field doesn't always get parsed

I'm pushing a few IIS logs to Logstash and some of the "message" fields do not get parsed. I've tested the log lines from the "message" field against the pattern in the default.json ingest pipeline and they parsed correctly, but Kibana just shows the whole log line in a single "message" field. Strangely, other lines get parsed correctly and I can see all the fields broken down. Any thoughts on what may be happening?

Thanks!

Hello @AlexB,

Are you using the IIS module in Filebeat? I'm asking because all the parsing is done in an ingest pipeline installed on Elasticsearch, and since you are running Logstash in the middle you might not be sending the events to that ingest pipeline.
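One quick way to confirm the module pipelines are installed on the cluster is to list them; a minimal check, assuming the default naming (pipeline ids look like filebeat-<version>-iis-access-default, and <es-host> is a placeholder for your Elasticsearch address):

curl -s 'http://<es-host>:9200/_ingest/pipeline/filebeat-*?pretty'

If that comes back empty, nothing in Elasticsearch is parsing the module's events.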

Hi @pierhugues,
I have enabled the IIS module inside Filebeat, so I'm assuming the logs are sent through that pipeline? I haven't been able to find anywhere on the server where Logstash has other patterns enabled.

My filebeat.yml has the modules configured to load and reload is set to true.

Thank you!

@AlexB Yes, and that's the catch: Filebeat modules use the Elasticsearch ingest node to do the parsing, so when you insert Logstash between the two you have to do one of the following:

  • Convert the ingest pipeline to a Logstash pipeline.
  • Manually install the Filebeat ingest pipelines into Elasticsearch and use conditionals plus the pipeline option in the elasticsearch output to route the events.
  • Or just configure Filebeat to send events directly to Elasticsearch; Filebeat will take care of installing the pipelines required for every module you enable.

If you don't do any work on the events inside Logstash, the last item is probably the easiest.
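For reference, the routing in the second option can look roughly like this in a Logstash pipeline; a minimal sketch, assuming a Filebeat version that puts the target pipeline name into [@metadata][pipeline] (older versions may require hard-coding the pipeline id, and localhost:9200 is a placeholder):

input {
  beats {
    port => 5044
  }
}

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      # Hand the event to the module's ingest pipeline in Elasticsearch
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
    }
  }
}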

@pierhugues I'm using only Filebeat, sending events directly to Elasticsearch, with just the IIS module enabled. The lines it's processing all come from the same IIS log file, and all the IIS log fields are enabled. Everything is a default install.

Here's an example of a log line that fails to parse (edited to remove my server info). It parses correctly in the Grok test parser, yet in Kibana it stays whole in the "message" field:

2018-08-08 16:08:09 W3SVC1 MY-SERVERNAME 10.10.74.132 POST /website/JobFarm/Controller.svc/Worker - 443 - 10.10.74.134 HTTP/1.1 - - - myserver.domain.net 200 0 0 1262 2378 234

There are similar lines that get parsed correctly, though, with the fields broken down. There's no error.message in the table view for this particular line, or for the other lines that failed to parse.
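One way to rule out the pipeline itself is to replay such a line through the Simulate Pipeline API; a hedged sketch, where <version> is a placeholder for your Filebeat version (check GET _ingest/pipeline/filebeat-* for the exact id):

POST _ingest/pipeline/filebeat-<version>-iis-access-default/_simulate
{
  "docs": [
    { "_source": { "message": "2018-08-08 16:08:09 W3SVC1 MY-SERVERNAME 10.10.74.132 POST /website/JobFarm/Controller.svc/Worker - 443 - 10.10.74.134 HTTP/1.1 - - - myserver.domain.net 200 0 0 1262 2378 234" } }
  ]
}

If the simulated document comes back with the iis fields extracted, the pipeline is fine and the problem is in how events reach it.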

Thanks for the help!

Not sure if this makes a difference, but there are hundreds of similar lines where only the date/time and the last two fields differ. I would have thought that since every line has a different timestamp, they should all still be parsed correctly.

Sorry @AlexB, I got confused because you said Logstash in the first comment:

I'm pushing a few IIS logs to logstash and some of the "message" fields do not get parsed. I've tested the log lines in the "message"

My apologies, I'm new to ELK and still working out how everything works and the terminology.

The Logstash output is disabled in my filebeat.yml. Should I try pushing to Logstash instead of directly to Elasticsearch?

Thank you for trying to help!

I've started Filebeat with the IIS module and had the above line in the watched log file.
When I look at the data in Kibana, everything is correctly extracted.
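For anyone reproducing the test, the steps were roughly the following; a sketch assuming a default Filebeat install on Windows (paths and output settings may differ per version):

PS> .\filebeat.exe modules enable iis
PS> .\filebeat.exe setup
PS> .\filebeat.exe -e

modules enable turns the module on, setup loads the index template and dashboards, and -e runs Filebeat in the foreground with logs to stderr so parsing errors are visible.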

Can you include your Filebeat configuration in this thread?
Are there any errors in the Filebeat log?

That's the weird part: I have identical lines, some that are parsed and some that aren't. Here are two screenshots of lines that are literally next to each other in Kibana, plus my Filebeat config. There are no errors in the Filebeat log, just two INFO lines.

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\inetpub\logs\LogFiles\*\*.log

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 3
setup.template.name: "hostname-iis"
setup.template.pattern: "hostname-iis-*"

name: hostname-iis

dashboard.beat: "hostname-iis"
output.elasticsearch:
  hosts: ["10.10.10.10:8081"]
  index: "hostname-iis-%{[beat.version]}-%{+yyyy.MM.dd}"
  protocol: "http"
  username: "logpush"
  password: "logpassword"

Thanks!

Looking at your configuration, I think you have a normal prospector watching the same files as the module.
If I look at the default iis module configuration, it uses the same path as yours:

var:
  - name: paths
    default:
      - C:/inetpub/logs/LogFiles/*/*.log
    os.darwin: [""]
    os.linux: [""]
    os.windows:
      - C:/inetpub/logs/LogFiles/*/*.log

The events from the manually defined prospector don't go through the ingest pipeline, so the fields are not extracted. So when you say some events are correct and some are not, they are in fact duplicates that don't go through the same flow inside Filebeat.
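You can see the split in Discover: events that went through the module carry the fileset fields, while the duplicates from the plain input don't. A quick filter, assuming Filebeat 6.x field names:

NOT fileset.module: iis

In this setup, every hit for that query should be an event from the manually defined input rather than from the module.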

Removing the following lines from your configuration should fix your problem.

- type: log
  enabled: true
  paths:
    - C:\inetpub\logs\LogFiles\*\*.log
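With the manual input gone, the top of filebeat.yml only needs to load the modules; a minimal sketch, keeping the rest of your file as it is:

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

You can sanity-check the edited file with filebeat test config -c filebeat.yml before restarting the service.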

@pierhugues

Thank you!!! That was it! Makes perfect sense now.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.