Filebeat, Cisco ASA and ECS fields

Filebeat 7.7, Cisco ASA logs
ASA syslog -> logstash for filtering -> filebeat (as original raw syslog) -> cisco module/asa -> logstash -> ES
According to Elastic's recommendations, the firewall should be the "observer" in the ECS fields, and any available information about the firewall should also go in the "host" fields. Unfortunately, the "host" fields get filled in with the details of the host running the Filebeat instance, because the syslog message passes through that host. I don't think this is correct. I can postprocess the message in Logstash before it is sent to Elasticsearch, but wouldn't it be better if the information was put in the correct fields in the first place?
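For anyone doing the postprocessing workaround mentioned above, a minimal Logstash filter sketch might look like the following. The field names are assumptions based on ECS 1.5 and a typical Beats event; adjust them to whatever your events actually carry:

```
filter {
  mutate {
    # Assumption: the original syslog sender (the ASA) is recorded in
    # log.source.address by the syslog input; promote it to observer.ip.
    rename => { "[log][source][address]" => "[observer][ip]" }
    # Drop the host.* details of the box running Filebeat, since they
    # describe the relay, not the device that produced the event.
    remove_field => [ "[host]" ]
  }
}
```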

I can't answer your actual question, but you could use a simpler config that might resolve it.
Have Filebeat do the syslog listening and receiving, so you have ASA syslog -> filebeat (as original raw syslog) -> cisco module/asa -> logstash -> ES
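As a sketch, the Cisco module's ASA fileset can listen for syslog directly in `filebeat.yml` (the bind address and port here are illustrative; 9001 is the module's usual default):

```yaml
filebeat.modules:
  - module: cisco
    asa:
      enabled: true
      # Listen for ASA syslog directly, instead of reading a file.
      var.syslog_host: 0.0.0.0
      var.syslog_port: 9001
```

That removes the first Logstash hop entirely; Filebeat receives the raw syslog and hands it straight to the module's parsing.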

Hi there, thanks for the reply. My original comment was pointing out that (as far as I can tell) the current filebeat modules are not fully ECS 1.5 compliant, and there needs to be some work clarifying the meaning and usage of terms such as "host" and "observer" within ECS.
As for our architecture: we have over 2,500 network devices sending syslogs to collectors on port 514, and we don't see a valid return on investment in touching them all just to send logs from different device types to different ports. We also need to be able to "forward" the raw syslog (before the beat has had a chance to process it) to our existing SIEM solution, as well as determine which of the beat's "pipelines" to push each message down. Logstash seems to be working OK for this at the moment; we are mixing the distributor and collector paradigms in the same Logstash/Filebeat platform, which gives us the flexibility to process different data sources differently.
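That distributor pattern can be sketched roughly like this in Logstash (hostnames, ports, and the routing condition are all hypothetical; the `syslog` output is a separate plugin that may need installing):

```
input {
  # Everything arrives on the single well-known port.
  udp { port => 514 }
}

output {
  # Forward the untouched raw syslog to the existing SIEM.
  syslog { host => "siem.example.com" port => 514 }

  # Route ASA messages to the collector tier for module processing.
  if [message] =~ "%ASA-" {
    tcp { host => "asa-collector.example.com" port => 9001 }
  }
}
```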

I understand. I too struggle with how Beats modules will parse logs into non-ECS fields. I don't see why Elastic can't synchronize their own products to the standard field convention they maintain.

I'm basically trying the same thing as you, except I don't want Filebeat in the middle; I'm trying just Cisco -> Logstash -> ES. So as not to reinvent the wheel, I'm trying to find Filebeat's Cisco module parsing config, convert it for Logstash to use, and customize it to fix the ECS incompatibilities. I'm still stuck on getting the Filebeat Cisco module config into Logstash.
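One thing worth noting: the module's parsing lives in an Elasticsearch ingest pipeline, not in Filebeat itself, so instead of translating it into Logstash filters you can have Logstash send events through that pipeline once it has been loaded into the cluster. A sketch (the hosts, index, and pipeline id are illustrative; check the actual ids installed in your cluster):

```
output {
  elasticsearch {
    hosts => ["https://es.example.com:9200"]
    # Hypothetical pipeline id - list the real ones with
    # GET _ingest/pipeline in Kibana Dev Tools.
    pipeline => "filebeat-7.7.0-cisco-asa-pipeline"
  }
}
```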

Hi @mgotechlock,
If you wonder where to find the filebeat cisco module parser configs, they are at:
/usr/share/filebeat/module/cisco

The pipeline for ASA and FTD, for example, is here:
/usr/share/filebeat/module/cisco/shared/ingest/asa-ftd-pipeline.yml
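If you want those ingest pipelines available in Elasticsearch without actually running Filebeat as a shipper, the setup command can load them on its own (sketch; run it on a host with the cluster configured in `filebeat.yml`):

```
# Load only the cisco module's ingest pipelines into Elasticsearch,
# without shipping any data.
filebeat setup --pipelines --modules cisco
```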