Hello, first of all I'm not sure whether this should go under Beats or Logstash, so I have added it here for now.
We use Logstash for the majority of our logs, mostly because many of the log types we use are not popular, and the default Elasticsearch mappings won't map the parameters we want to alert on (most of it just sits in "message"). This means syslog, auth logs, etc. get processed via Logstash as well (I know Elasticsearch can handle those easily with the Filebeat default mappings).
My question is this: we want to use SIEM, but our mappings (Logstash grok) don't populate the logs properly, as they are not using the same fields as your ECS standard. What is the best way around this?
I had some ideas (they may not even be possible):
Could we somehow parse syslog directly via Filebeat > Elasticsearch, skipping Logstash entirely, while all other aggregators still go via Logstash? For example:
/var/log/syslog > filebeat > output elasticsearch
/var/log/application/console.log > output logstash.
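If that route is viable, I imagine the syslog side would look roughly like this sketch of a `filebeat.yml` using the system module (hosts are placeholders; I'm assuming a single Filebeat instance can only have one output, so the Elasticsearch-bound syslog path would need its own Filebeat instance separate from whatever ships console.log to Logstash):

```yaml
# Hypothetical sketch, not a tested config.
# System module parses syslog/auth into ECS fields out of the box.
filebeat.modules:
  - module: system
    syslog:
      enabled: true

# One output per Filebeat instance, so this instance only talks
# to Elasticsearch; a second instance (or agent) would feed Logstash.
output.elasticsearch:
  hosts: ["localhost:9200"]
```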
Or are we left with updating our grok patterns for syslog to match what Auditbeat would populate, so that our SIEM dashboards populate correctly?
We essentially have an index per log type:
syslog = system-logs-*
nginx = nginx-*
auth = auth-*
And we split them out via logstash with different groks.
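For context, the split looks roughly like this (a simplified sketch, with the grok pattern and field names made up for illustration; if we went the ECS route, I assume we'd be adding `mutate`/`rename` steps like the one below to map our fields onto ECS names such as `source.ip`):

```conf
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGLINE}" }
    }
    # Hypothetical ECS alignment: rename our own field to the ECS one.
    mutate {
      rename => { "src_ip" => "[source][ip]" }
    }
  }
}

output {
  if [type] == "syslog" {
    elasticsearch { index => "system-logs-%{+YYYY.MM.dd}" }
  } else if [type] == "nginx" {
    elasticsearch { index => "nginx-%{+YYYY.MM.dd}" }
  }
}
```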
Hopefully this makes some sense; if not, feel free to ask for more information.