Unable to import IIS logs into Logstash

I have looked up a few older articles on how to get IIS logs into Logstash. For now I was going to just put the logs in a directory and not worry about Filebeat. I got a working pattern (I used the Grok Debugger to make sure it parses the content) and created a custom conf file. When I run Logstash this is all I see, and nothing shows up in Elasticsearch.

[2018-04-14T19:57:51,848][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"fb_apache", :directory=>"F:/logstash/modules/fb_apache/configuration"}
[2018-04-14T19:57:51,872][INFO ][logstash.modules.scaffold] Initializing module {:module_name=>"netflow", :directory=>"F:/logstash/modules/netflow/configuration"}
[2018-04-14T19:57:52,255][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2018-04-14T19:57:52,812][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.2.2"}
[2018-04-14T19:57:54,671][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-04-14T19:58:00,998][INFO ][logstash.pipeline ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-04-14T19:58:01,467][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2018-04-14T19:58:01,482][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://localhost:9200/, :path=>"/"}
[2018-04-14T19:58:01,685][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2018-04-14T19:58:01,741][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>nil}
[2018-04-14T19:58:01,756][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the type event field won't be used to determine the document _type {:es_version=>6}
[2018-04-14T19:58:01,772][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-04-14T19:58:01,787][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-04-14T19:58:01,850][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost:9200"]}
[2018-04-14T19:58:03,626][INFO ][logstash.pipeline ] Pipeline started succesfully {:pipeline_id=>"main", :thread=>"#<Thread:0x5a632958 run>"}
[2018-04-14T19:58:03,733][INFO ][logstash.agent ] Pipelines running {:count=>1, :pipelines=>["main"]}
[2018-04-14T19:58:10,169][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T19:58:32,171][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T19:58:53,983][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T19:59:14,482][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T19:59:35,046][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T19:59:56,982][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T20:00:17,754][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T20:00:38,376][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T20:01:00,270][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576
[2018-04-14T20:01:22,292][WARN ][logstash.inputs.file ] Reached open files limit: 4095, set by the 'max_open_files' option or default, files yet to open: 576

[2018-04-14T20:04:22,823][WARN ][logstash.runner ] SIGINT received. Shutting down.
[2018-04-14T20:04:24,754][INFO ][logstash.pipeline ] Pipeline has terminated {:pipeline_id=>"main", :thread=>"#<Thread:0x5a632958 run>"}

This is the conf, in case it is needed to debug the issue:

input {
  file {
    type => "iis-w3c"
    path => "F:/w3c_logs/W3SVC*/*.log"
  }
}

filter {
  if [message] =~ "^#" {
    drop {}
  }

  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{IPV4:ServerIP} %{NOTSPACE:stem} %{URIPATH:page} %{NOTSPACE:querystring} %{NUMBER:serverPort} %{DATA:username} %{IPV4:clientIP} %{NOTSPACE:referer} - %{NOTSPACE:requestHost} %{NUMBER:response} %{NUMBER:subresponse} %{NUMBER:win32response} %{NUMBER:bytesSent} %{NUMBER:bytesReceived} %{NUMBER:timetaken} %{IPORHOST:OriginalIP}"]
    tag_on_failure => [ ]
  }

  # Set the event timestamp from the log

  date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
    timezone => "Etc/UTC"
  }

  if [bytesSent] {
    ruby {
      # Logstash 5.x+ removed direct event['field'] access; use the event get/set API
      code => "event.set('kilobytesSent', event.get('bytesSent').to_i / 1024.0)"
    }
  }

  if [bytesReceived] {
    ruby {
      code => "event.set('kilobytesReceived', event.get('bytesReceived').to_i / 1024.0)"
    }
  }
  mutate {
    convert => ["bytesSent", "integer"]
    convert => ["bytesReceived", "integer"]
    convert => ["timetaken", "integer"]

    add_field => { "clientHostname" => "%{clientIP}" }

    # Finally remove the original log_timestamp field since the event
    # will have the proper date on it
    remove_field => [ "log_timestamp" ]
  }

  useragent {
    source => "useragent"
    prefix => "browser"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]

    # Log records into month-based indexes
    index => "%{type}-%{+YYYY.MM}"
  }

  # stdout included just for testing
  #stdout { codec => rubydebug }
}

Any ideas what the issue might be?

If you want Logstash to read the input files from the beginning, you need to set the file input's start_position option (see the sketch below). You should study the file input docs thoroughly, especially what's said about sincedb. Please also consult previous threads on this topic, as your question is an extremely common one.
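For example, here is a minimal sketch of your input block with start_position set. The sincedb_path value is only an illustration for repeated testing on Windows (NUL discards the sincedb, so Logstash re-reads the files on every run); drop it for real use.

input {
  file {
    type => "iis-w3c"
    path => "F:/w3c_logs/W3SVC*/*.log"

    # Read pre-existing files from the start instead of only tailing new lines.
    start_position => "beginning"

    # Testing only: discard the sincedb so every run starts from scratch.
    sincedb_path => "NUL"
  }
}

Note that start_position => "beginning" only applies to files Logstash has not already recorded in its sincedb, which is why deleting or redirecting the sincedb matters while you are testing.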

Thanks @magnusbaeck, that was the only thing missing from the configuration. None of my searches turned up that info, so I wasn't sure exactly what to do until you mentioned start_position in the conf file.

Appreciate your help

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.