Windows Logs to Logstash

Hi All,

First of all, I'm very new to ELK and the surrounding tooling.

I have two servers. One is configured with the ELK stack:
Elasticsearch
Logstash
Kibana

The other server handles WEF (Windows Event Forwarding).

All servers in this example are Windows Server 2012 R2.

What I would like to do is use this setup as a proof of concept for building SIEM infrastructure for a project I'm currently working on.

I'm stuck at the Logstash part. From reading the docs it appears that I have to configure Logstash to accept input from Winlogbeat.

I can't get my head around this part. I have the standard "simple" logstash.conf file from the docs, and I now understand I need to add the entries for accepting (Windows) Beats data, but I'm not clear on the lines I need to enter. Also, according to netstat, Logstash isn't listening on port 5044. At what point do I specify this?

I'm not finding the documentation clear on these points. My ultimate aim is to set up various Beats on other servers, gather the data in Elasticsearch, and use Kibana to visualise it and build dashboards.

Here's my config file (taken straight from the documentation):

input { stdin { } }
output {
  elasticsearch { hosts => ["[IP address]:9200"] }
  stdout { codec => rubydebug }
}

Here's the latest lines of errors from the Log:

[2018-06-22T10:58:08,310][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"6.3.0"}
[2018-06-22T10:58:08,482][INFO ][logstash.config.source.local.configpathloader] No config files found in path {:path=>"C:/ELK/logstash/bin/logstash.conf"}
[2018-06-22T10:58:08,498][ERROR][logstash.config.sourceloader] No configuration found in the configured sources.
[2018-06-22T10:58:09,170][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

I've not found a simple tutorial that explains how to set this up.

Apologies for the noob question, but I am in fact a noob when it comes to Elastic!

OK, so your config is not right for the use case you are describing, but the more fundamental problem is that logstash is not able to read your configuration. How are you starting logstash, and what are the non-comment lines in logstash.yml?
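
Based on that "No config files found in path" line, logstash is looking at C:/ELK/logstash/bin/logstash.conf and not finding anything usable there. Whatever launches the service needs to point at the real file, either via path.config in logstash.yml or with -f on the command line. Just as an illustration (the paths here are assumptions, adjust to wherever your file actually lives):

path.config: "C:/ELK/logstash/config/logstash.conf"

or

bin\logstash.bat -f C:\ELK\logstash\config\logstash.conf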

Once you can get logstash to read the configuration you will probably want to replace that stdin input with

input { beats { port => 5044 } }
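
and, on the Winlogbeat side, point the output at logstash instead of elasticsearch. A minimal sketch of the relevant section of winlogbeat.yml (the address is a placeholder for your logstash server):

output.logstash:
  hosts: ["[Logstash IP]:5044"]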

Hey Badger,

Since I posted I added the port reference and restarted the logstash service.

Logstash is registered as a windows service using NSSM.

I restarted it and now I have different logs. Mea culpa: this probably caused my original confusion, but it's still not working correctly.

See the lines from the log below.

[2018-06-22T15:57:54,749][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.0"}
[2018-06-22T15:58:03,061][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-22T15:58:04,343][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://10.120.24.105:9200/]}}
[2018-06-22T15:58:04,374][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://10.120.24.105:9200/, :path=>"/"}
[2018-06-22T15:58:05,030][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://10.120.24.105:9200/"}
[2018-06-22T15:58:05,421][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-06-22T15:58:05,436][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-06-22T15:58:05,483][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-06-22T15:58:05,530][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-06-22T15:58:05,702][INFO ][logstash.outputs.elasticsearch] Installing elasticsearch template to _template/logstash
[2018-06-22T15:58:06,108][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//10.120.24.105:9200"]}
[2018-06-22T15:58:07,108][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-06-22T15:58:07,280][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x7894d915 run>"}
[2018-06-22T15:58:07,561][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-22T15:58:07,671][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-06-22T15:58:08,249][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}
[2018-06-22T15:58:22,282][WARN ][logstash.runner          ] SIGINT received. Shutting down.

As you can see, it says it shut down, and also if I run

netstat -an | find "5044"

I get nothing. I checked the output from netstat and this server isn't listening on 5044.
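
For reference, the equivalent check from another machine, using the PowerShell that ships with 2012 R2, would be something like:

Test-NetConnection -ComputerName [Logstash IP] -Port 5044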

Here's the config.

Clearly I am missing something obvious.

input {
  beats { port => 5044 }
}
output {
  elasticsearch { hosts => ["10.120.24.105:9200"] }
  stdout { codec => rubydebug }
}

Well, it was probably listening for 15 seconds, from 15:58:07 to 15:58:22, but something sent it a SIGINT, which caused it to shut down.

Are you running this on a remote server, and did you log off after starting the service?


Hi Badger,

I think you were right. I logged on and restarted the service. It takes a long time to start, and previously it must have been caught by a reboot.

The server is now listening on port 5044. Looks like it started (according to this dump).

[2018-06-22T16:02:49,635][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.3.0"}
[2018-06-22T16:02:56,120][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>3, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2018-06-22T16:02:56,901][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://[Elastic IP]:9200/]}}
[2018-06-22T16:02:56,920][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>http://[Elastic IP]:9200/, :path=>"/"}
[2018-06-22T16:02:57,292][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://[Elastic IP]:9200/"}
[2018-06-22T16:02:57,463][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2018-06-22T16:02:57,463][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2018-06-22T16:02:57,495][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2018-06-22T16:02:57,526][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2018-06-22T16:02:57,604][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//[Elastic IP]:9200"]}
[2018-06-22T16:02:58,432][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2018-06-22T16:02:58,542][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x51d35259 run>"}
[2018-06-22T16:02:58,573][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-06-22T16:02:58,635][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-06-22T16:02:59,042][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}

I got curious and tried to PuTTY in over port 5044, and it threw up an exception.

[2018-06-22T16:09:17,900][WARN ][io.netty.channel.DefaultChannelPipeline] An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

(there are more lines, but too many to post in this message)

I'm not sure if I should worry. The PuTTY terminal quit with an error...

I wouldn't worry about what PuTTY does; try pointing filebeat at it.


OK, cool. Old habits. I'm just used to using a terminal to check whether I get a response from a port. I'll get back once I've pointed something at it.

Hi Badger.

Success.

I've been trying to get Windows logs visualising inside Kibana and finally managed it.

I'll need to figure out how to index and visualise only the specific log (Forwarded Events) that I'm interested in, but for now it appears I have a working ELK stack and Kibana doing some very basic stuff.
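
From what I can see in the Winlogbeat docs, limiting collection to the forwarded log should just be a matter of listing that channel in winlogbeat.yml, something like this (untested sketch):

winlogbeat.event_logs:
  - name: ForwardedEvents
    forwarded: true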
