Filtering Windows Events

I guess I have 2 questions...

  1. We want to start going through all the Windows command-line and PowerShell logs. Where is the best place to start filtering out the standard traffic on our network? I.e., "this is normal, we don't need to see it."

  2. The second is: how can we pull just the info we want out of the PowerShell event log instead of the whole mess? I.e., all we want out of the Event Data is what comes after "CommandLine=" in the XML.

...Maybe you have to do all this in Kibana, not sure.

Thanks!!! This is going to be awesome!

I would set up some filters to only index the event IDs that you are interested in. Then send all that data to Elasticsearch and start analyzing. After you know the data better you can decide if you want to drop some events on the Winlogbeat side, and you can do this with processors.

winlogbeat.event_logs:
  - name: Security
    ignore_older: 168h
    # Process Creation
    event_id: 4688
  - name: PowerShell
    ignore_older: 168h
    event_id: 800
  - name: Microsoft-Windows-PowerShell/Operational
    ignore_older: 168h
    event_id: 4104
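Once you know what "normal" looks like, dropping known-benign events at the shipper could be sketched with a drop_event processor like the one below. The process name here is purely a hypothetical placeholder for whatever is noisy on your network:

```yaml
processors:
  - drop_event:
      when:
        contains:
          # Hypothetical example: drop 4688 events for a known-noisy binary.
          event_data.NewProcessName: "C:\\Windows\\System32\\SearchFilterHost.exe"
```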

I think you need to enable process auditing in your Group Policy for 4688. See Command line process auditing | Microsoft Learn

And for PowerShell auditing info see More New Stuff in PowerShell V5: Extra PowerShell Auditing | Learn Powershell | Achieve More

Some of the filtering done in the blog post Monitoring Windows Logons with Winlogbeat might be applicable to this problem.

You can use processors to include (or drop) only the fields that you want in your events. For example:

processors:
  - include_fields:
      fields:
        - computer_name
        - event_id
        - log_name
        - record_number
        - source_name
        - user
        - event_data.CommandLine

Deciding what is normal can be hard.

I would probably set up some alerts or dashboards with static conditions for things like commands that use hidden windows (-w hidden) or encoded commands (-EncodedCommand).
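For instance, a saved search or dashboard filter in Kibana could use a static query string along these lines (this assumes your events carry the command line in event_data.CommandLine; adjust the field name to your mapping):

```
event_data.CommandLine:("-EncodedCommand" OR "-w hidden")
```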

And there is a Machine Learning feature in X-Pack that you can try out on your data. I haven't tried it on Winlogbeat data yet, but we do have an example using it with auditd logs from Linux where it looks at all process executions and finds anomalies. A good place to start is here.

Unfortunately, the PowerShell 'Message/Data' blob doesn't parse this out. It all appears as param3 in the Kibana dashboard...

So like from this blob of data --
<Data>NewCommandState=Stopped SequenceNumber=1463 HostName=ConsoleHost HostVersion=5.1.14409.1005 HostId=b99970c6-0f5f-4c76-9fb0-d5f7a8427a2a HostApplication=C:\WINDOWS\system32\WindowsPowerShell\v1.0\powershell.exe EngineVersion=5.1.14409.1005 RunspaceId=bd4224a9-ce42-43e3-b8bb-53a302c342c9 PipelineId=167 CommandName=Import-Module CommandType=Cmdlet ScriptName= CommandPath= CommandLine=Import-Module -Verbose.\nishang.psm1</Data>

All we want from this is
"Import-Module -Verbose.\nishang.psm1"
from CommandLine=
into the dashboard.

Is this possible with Grok? We are trying but man it is tough.

I can't believe no one has done this ye...

That's a shame that Windows doesn't pass this info as structured data in the XML. So yes, this is possible with grok or probably even the kv filter if you want all the data. But if all you want is the CommandLine value then grok is probably the simplest.

Untested, but it's a start:

input {
  beats {
    port => 5044
  }
}

filter {
  grok {
    match => { "[event_data][param3]" => "CommandLine=%{GREEDYDATA:command_line}" }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
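A quick way to sanity-check the grok pattern outside of Logstash is to run the equivalent regex over a sample of the param3 blob. This is only a sketch: grok's GREEDYDATA macro reduces to ".*", and the blob below is an abridged version of the event data shown earlier.

```python
import re

# Abridged param3 blob from the PowerShell event ID 800 event;
# the real blob contains many more key=value pairs.
param3 = (
    "NewCommandState=Stopped SequenceNumber=1463 HostName=ConsoleHost "
    "CommandName=Import-Module CommandType=Cmdlet ScriptName= CommandPath= "
    "CommandLine=Import-Module -Verbose.\\nishang.psm1"
)

# "CommandLine=%{GREEDYDATA:command_line}" is essentially this regex:
match = re.search(r"CommandLine=(?P<command_line>.*)", param3)
command_line = match.group("command_line") if match else None
print(command_line)  # Import-Module -Verbose.\nishang.psm1
```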

Thanks Andrew! Is most of this stuff in the Logstash book? For example, what file do the grok patterns go in, etc.?

I'm not familiar with the contents of the Logstash book, but that is covered in the Logstash Getting Started guide.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.