Failover strategy and avoiding duplicate events when multiple Winlogbeat instances send the same events to Elasticsearch via Windows Event Collector

Background (Infrastructure)
Events are collected from about a thousand machines on a centralized Windows Event Collector (WEC) server. Windows Event Forwarding is configured via GPO to forward each machine's events to the WEC. For high availability we set up two WEC servers, which both receive the same (duplicate) events. Winlogbeat is installed on a single WEC server and sends data to a single Elasticsearch cluster. We cannot run Winlogbeat on both servers, as that would duplicate events in Elasticsearch.

Problem Statement:
How can we configure Winlogbeat on both WEC servers while avoiding duplicate event forwarding? And how do we handle failover, so that if one server goes down, Winlogbeat on the second server takes over and sends data to Elasticsearch without any loss of event data?

I am new to the ELK stack; kindly point me towards the best approach.
Thanks in advance



Welcome to this forum. Although this post is 2 years old, it might be a good starting point:

Basically, you have to calculate an ID for every message based on its content, so that two messages with the same content receive the same ID. If Elasticsearch receives a message with the same ID as an existing document, it does not add the message again but instead updates the existing document.
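As a sketch of the idea, here is a content-based ID in Python (the field choice is illustrative; pick whichever fields uniquely identify an event in your setup):

```python
import hashlib
import json

def event_id(event: dict) -> str:
    """Derive a deterministic document ID from event content.

    Identical events always produce the same ID, so the second copy
    overwrites the first in Elasticsearch instead of becoming a duplicate.
    """
    # Hypothetical field choice -- use fields that uniquely identify an event.
    key = json.dumps(
        {k: event[k] for k in ("record_id", "computer_name", "event_id")},
        sort_keys=True,
    )
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# The same event arriving from both WEC servers gets the same ID.
e = {"record_id": 42, "computer_name": "HOST01", "event_id": 4624}
assert event_id(e) == event_id(dict(e))
```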

I hope this gives you a direction.

Best regards

I have also encountered a similar issue. You could maintain two different indices, with optimizations, since only one index would be in primary use and the other would rarely be written to. I haven't tried this personally, but you might also maintain a single Winlogbeat registry file (which tracks the read position in the event logs), either at a common location or copied to the other server when one server fails and copied back when it comes up. In that case you need a trigger mechanism to detect when your WEC server goes down, so you can start the Winlogbeat service on the other one.


You can also do this deduplication with Winlogbeat itself.
In Logstash you would usually use the fingerprint filter to create a hash of some fields, producing a unique ID. This is stored in Elasticsearch as the document ID, which is unique.
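For example, a minimal Logstash pipeline might look like this (the field names and host are illustrative, not from your setup):

```
filter {
  fingerprint {
    source => ["[winlog][computer_name]", "[winlog][record_id]"]
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    concatenate_sources => true
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```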

Winlogbeat now has the same feature: every Beat supports these "processors".

Here is an example for Filebeat; the processors are the same:

With these you can simply run Winlogbeat on each server and create an ID for each event from some unique fields; the same fields must be used on both servers. After the "first" event is sent to Elasticsearch, it is stored. When the second (duplicate) event arrives, Elasticsearch checks the document ID and overwrites the existing document, so you only see a single event.
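A minimal winlogbeat.yml sketch of this, assuming the combination of computer name and record ID identifies an event (adjust the fields and hosts to your environment; they are placeholders here):

```yaml
winlogbeat.event_logs:
  - name: ForwardedEvents

processors:
  - fingerprint:
      fields: ["winlog.computer_name", "winlog.record_id"]
      target_field: "@metadata._id"   # Beats use @metadata._id as the document ID

output.elasticsearch:
  hosts: ["https://elasticsearch:9200"]
```

With identical processor settings on both WEC servers, both Winlogbeat instances can run at the same time and the duplicates collapse into one document per event.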

But the index will also grow in size: the overwritten documents are only marked as deleted. After rollover, or once you stop writing to the index, you should run a "force merge" so the overwritten documents are deleted completely.

I hope this helps

