I created a standard user in Windows and tried to create a file in the C:\Windows\System32 folder, which failed as expected. My Auditbeat is configured to watch that folder, but it does not capture any events for the failed attempt.
Here is my Auditbeat (version 6.4.0) configuration file:
Is there any way I can do this? On Linux I was able to do it by specifying audit rules in the configuration file. Is there a similar alternative for Windows?
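For reference, the Linux approach mentioned above uses the auditd module in Auditbeat, which accepts kernel audit rules in auditctl syntax; a minimal sketch (the watched path and key name are illustrative):

```yaml
auditbeat.modules:
  - module: auditd
    # auditctl-style rules; -w watches a path, -p wa records write and
    # attribute-change attempts (including failed ones), -k tags the events.
    audit_rules: |
      -w /etc/ -p wa -k etc-changes
```

The kernel audit subsystem records the attempt itself, which is why failed accesses show up on Linux; the file_integrity module only sees the file system after a change has succeeded.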
The file integrity module monitors the file system for changes and tells Auditbeat when something has changed. So you won't see events for a failed access attempt.
You can still get this information from Windows and have it sent to Elasticsearch. If you enable auditing for the directories that you are interested in monitoring then Windows will write events to the Security Event log. You can use Winlogbeat to pull events from the event logs.
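A sketch of what enabling this looks like, assuming an elevated PowerShell session (the folder path, identity, and rights below are illustrative, not required values):

```powershell
# Turn on success and failure auditing for the File System subcategory.
auditpol /set /subcategory:"File System" /success:enable /failure:enable

# Add an audit rule (SACL) to the folder so Windows writes object-access
# events (e.g. 4656/4663) to the Security log for matching attempts.
$path = 'C:\Windows\System32'
$acl  = Get-Acl -Path $path -Audit
$rule = [System.Security.AccessControl.FileSystemAuditRule]::new(
    'Everyone', 'Write,Delete,TakeOwnership',
    'ContainerInherit,ObjectInherit', 'None', 'Failure')
$acl.AddAuditRule($rule)
Set-Acl -Path $path -AclObject $acl
```

With both the audit policy and the SACL in place, the failed file-creation attempt from the question should appear in the Security event log, where Winlogbeat can pick it up.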
I tried using Winlogbeat too, but can I specify which folders to listen to in Winlogbeat?
My Winlogbeat configuration is as below:
###################### Winlogbeat Configuration Example ##########################

#======================= Winlogbeat specific options ==========================

winlogbeat.event_logs:
  - name: Application
    ignore_older: 72h
  - name: Security
  - name: System
  - name: Windows PowerShell
  - name: HardwareEvents
  - name: Internet Explorer
  - name: Key Management Service
  - name: Operations Manager
  - name: Windows Azure

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

fields_under_root: true
fields:
  osCategory: windows
  osName: windows-server
  osVersion: 2016
  device: ["vum"]
  beatName: "winlogbeat"

output.kafka:
  # Boolean flag to enable or disable the output module.
  enabled: True

  # The list of Kafka broker addresses from where to fetch the cluster
  # metadata. The cluster metadata contain the actual Kafka brokers events
  # are published to.
  hosts:
    - x.x.x.x:9092

  # The Kafka topic used for produced events. The setting can be a format
  # string using any event field. To set the topic from document type use
  # `%{[type]}`.
  topic: 'fir_beat'

  # The Kafka event partitioning strategy. Default hashing strategy is `hash`
  # using the `output.kafka.key` setting or randomly distributes events if
  # `output.kafka.key` is not configured.
  partition:
    hash:
      # If enabled, events will only be published to partitions with reachable
      # leaders. Default is false.
      reachable_only: False

  # Authentication details. Password is required if username is set.

  # Kafka version winlogbeat is assumed to run against. Defaults to the oldest
  # supported stable version (currently version 0.8.2.0).
  version: 0.10.0

  # Metadata update configuration. Metadata do contain leader information
  # deciding which broker to use when publishing.
  metadata:
    retry:
      # Max metadata request retry attempts when cluster is in middle of
      # leader election. Defaults to 3 retries.
      max: 3

      # Waiting time between retries during leader elections. Default is 250ms.
      backoff: 250ms

  # The number of concurrent load-balanced Kafka output workers.
  worker: 1

  # The maximum number of events to bulk in a single Kafka request. The
  # default is 2048.
  bulk_max_size: 2048

  # The number of seconds to wait for responses from the Kafka brokers before
  # timing out. The default is 30s.
  timeout: 30s

  # The maximum duration a broker will wait for number of required ACKs. The
  # default is 10s.
  broker_timeout: 10s

  # The number of messages buffered for each Kafka broker. The default is 256.
  channel_buffer_size: 256

  # Sets the output compression codec. Must be one of none, snappy and gzip.
  # The default is gzip.
  compression: gzip

  # The maximum permitted size of JSON-encoded messages. Bigger messages will
  # be dropped. The default value is 1000000 (bytes). This value should be
  # equal to or less than the broker's message.max.bytes.
  max_message_bytes: 1000000

  # The ACK reliability level required from broker. 0=no response, 1=wait for
  # local commit, -1=wait for all replicas to commit. The default is 1. Note:
  # If set to 0, no ACKs are returned by Kafka. Messages might be lost
  # silently on error.
  required_acks: 1

  # The configurable ClientID used for logging, debugging, and auditing
  # purposes. The default is "beats".
  client_id: "beats"
One sample event that is not getting captured (I am using PowerShell) is a change of owner: TAKEOWN /F c:\test-folder\siemtestfile123456.txt
Winlogbeat just receives the events. All of the configuration for enabling file system auditing, and for telling Windows which directories/files to monitor, is handled in Windows itself. For example, there are a few tutorials out there: 1, 2.
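So folder selection happens via the SACL in Windows, not in Winlogbeat; on the Winlogbeat side you can at most narrow the Security channel to the relevant event IDs. A sketch of that (the IDs are the standard Windows object-access events, chosen here as an assumption about what you want to keep):

```yaml
winlogbeat.event_logs:
  - name: Security
    # 4656 = a handle to an object was requested (logged for failures too),
    # 4663 = an attempt was made to access an object,
    # 4670 = permissions on an object were changed (covers ownership changes
    # such as the TAKEOWN example above).
    event_id: 4656, 4663, 4670
```

The ownership change from TAKEOWN should then arrive as a Security-log event rather than as a file integrity event.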