Aggregated Logs and Reformatting for QRADAR

Hi all,

I'm currently researching logstash (along with filebeat) as a possible solution for a problem we're seeing.

We currently have a log server acting as a central manager: a bunch of different Linux and Windows servers send their logs to this log server, where Wazuh aggregates them before Rsyslog forwards them to a remote QRadar server.

The problem we're seeing is that Wazuh is prepending its own server details to the front of each log line, so QRadar sees everything as coming from the log server rather than the original host. I've been told that Logstash may help me resolve this problem.

I've managed to configure Filebeat and Logstash to work together to interpret an extract of one of these logs, and at the moment it's writing the result to another log file (I just need to work out how to format it).

So my questions are: as I'm pulling from a log file, do I need to set the Filebeat input type to filestream?

Do I need to use Grok in my pipeline.conf file to parse the logs, and if so, can anyone point me to some good tutorials to understand this?

Thank you everyone in advance,

Ombit

Logstash treats every received line as a message and doesn't change it. Filebeat adds additional fields, which you can keep or remove as you like, but the original message will not be altered.
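To make that concrete, here is a minimal pipeline sketch that receives events from Filebeat and writes the message field out unchanged. The port and output path are placeholders, not anything from your setup:

```conf
# Minimal Logstash pipeline sketch: Beats in, file out.
# No filters, so the message field passes through unmodified.
input {
  beats {
    port => 5044            # assumed port; must match output.logstash in filebeat.yml
  }
}
output {
  file {
    path  => "/var/log/logstash/out.log"        # placeholder path
    codec => line { format => "%{message}" }    # write only the raw message
  }
}
```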

Yes, you need to set the input type to filestream if the source is a plain-text file. Read the documentation.
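A sketch of what that looks like in filebeat.yml. The id is required to be unique per filestream input; the log path and Logstash host here are assumptions you would replace with your own:

```yaml
# filebeat.yml sketch: tail a plain-text log and ship it to Logstash.
filebeat.inputs:
  - type: filestream
    id: wazuh-relay-logs          # each filestream input needs a unique id
    paths:
      - /var/log/aggregated.log   # placeholder path to your aggregated log
output.logstash:
  hosts: ["localhost:5044"]       # assumed Logstash host:port
```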

Depending on the log format, you can use the Grok, CSV, or dissect plugins. If you choose Grok, you can find plenty of YouTube videos, and https://grokdebug.herokuapp.com/ will help you build patterns to parse the log fields.
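For the prefix problem described above, a grok filter along these lines could strip a syslog-style header added by the relay and keep the inner event. The pattern and field names are guesses at the format, not your actual log layout, so treat this as a starting point for the grok debugger:

```conf
filter {
  grok {
    # Hypothetical pattern: relay timestamp and hostname, then the original event.
    match => { "message" => "%{SYSLOGTIMESTAMP:relay_ts} %{HOSTNAME:relay_host} %{GREEDYDATA:original_event}" }
  }
  mutate {
    # Replace the message with the inner event so QRadar sees the original host's log,
    # then drop the helper fields.
    replace      => { "message" => "%{original_event}" }
    remove_field => ["relay_ts", "relay_host", "original_event"]
  }
}
```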
Do not touch pipeline.conf; just create log.conf (or a similar name) in /etc/logstash/conf.d.

