Filebeat, Cisco ASA and ECS fields

Filebeat 7.7, Cisco ASA logs
ASA syslog -> logstash for filtering -> filebeat (as original raw syslog) -> cisco module/asa -> logstash -> ES
According to the recommendations from Elastic, the firewall should be the "observer" in the ECS fields, and any available information about the firewall should be in the "host" fields as well. Unfortunately, the "host" fields get filled in with the details of the host that is running the Filebeat instance, because the syslog message passes through that host. I don't think this is correct. I can post-process the message in Logstash before it gets sent to Elasticsearch, but wouldn't it be better if the information was put in the correct fields in the first place?
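(For illustration, the post-processing I mean would be something like the Logstash filter below. This is only a sketch: it assumes event.module is set by the module, and that at this point host.hostname still carries the firewall name parsed from the syslog header rather than the collector's own details, which is exactly the part that is not guaranteed today.)

filter {
  # Sketch: treat the firewall named in the syslog header as the observer,
  # and drop the host.* fields that describe the collector running Filebeat.
  if [event][module] == "cisco" {
    mutate {
      copy         => { "[host][hostname]" => "[observer][hostname]" }
      remove_field => [ "host" ]
    }
  }
}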

I can't speak to your actual question, but you could use a simpler config which might resolve it.
Have Filebeat do the syslog listening and receiving, so you have: ASA syslog -> filebeat (as original raw syslog) -> cisco module/asa -> logstash -> ES
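Something along these lines in modules.d/cisco.yml would do it (a sketch only; the port is arbitrary and the variable defaults may differ between Filebeat versions):

- module: cisco
  asa:
    enabled: true
    # Listen for ASA syslog directly instead of reading files
    var.input: syslog
    var.syslog_host: 0.0.0.0
    var.syslog_port: 9001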

Hi there, thanks for the reply. My original comment was pointing out that (as far as I can tell) the current filebeat modules are not fully ECS 1.5 compliant, and there needs to be some work clarifying the meaning and usage of terms such as "host" and "observer" within ECS.
As for our architecture: we have over 2,500 network devices sending syslog to collectors on port 514, and we do not see a valid return on investment in touching them all just to send logs from different types of devices to different ports. We also need to be able to "forward" the raw syslog (before the beat has had a chance to process it) to our existing SIEM solution, as well as determine which of the Beats "pipelines" to push each message down. Logstash seems to be working OK for this at the moment; we are mixing the distributor and collector paradigms in the same Logstash/Filebeat platform to give us the flexibility to process different data sources differently.
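(Very roughly, the distributor side of that looks like the sketch below; the hostnames, ports and the routing test are placeholders rather than our actual config.)

input {
  # Receive raw syslog from all devices; no parsing here, so the message stays untouched.
  udp { port => 514 }
  tcp { port => 514 }
}

output {
  # 1) Copy of the raw line to the existing SIEM.
  tcp {
    host  => "siem.example.internal"           # placeholder
    port  => 514
    codec => line { format => "%{message}" }
  }

  # 2) Route ASA messages to the Filebeat instance running the cisco module.
  if [message] =~ /%ASA-/ {
    tcp {
      host  => "filebeat.example.internal"     # placeholder
      port  => 9001
      codec => line { format => "%{message}" }
    }
  }
}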

I understand. I too struggle with how Beats modules will parse logs into non-ECS fields. I don't see why Elastic can't synchronize their own products to the standard field convention they maintain.

I'm basically trying the same thing as you, except I don't want Filebeat in the middle; I am trying just Cisco -> Logstash -> ES. So as not to reinvent the wheel, I'm trying to figure out where Filebeat's Cisco module parsing config is, convert it for Logstash to use, customize it to work around the ECS incompatibilities, and do it that way. I'm still stuck on the part where the Filebeat Cisco module config gets converted into Logstash.

Hi @mgotechlock,
If you're wondering where to find the Filebeat Cisco module parser configs, they are at:
/usr/share/filebeat/module/cisco

The pipeline for asa and ftd, for example, is here:
/usr/share/filebeat/module/cisco/shared/ingest/asa-ftd-pipeline.yml
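(Those are the package-install paths; if Filebeat was installed from a tarball or on another platform, something like this should locate the module's pipeline files, adjusting the base path to your install:)

# List the ingest pipelines shipped with the Cisco module
find /usr/share/filebeat/module/cisco -name '*.yml'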

So if I were to take the ASA pipeline at /usr/share/filebeat/module/cisco/shared/ingest/asa-ftd-pipeline.yml and try to use it in Logstash, it starts like this:

description: "Pipeline for Cisco {< .internal_PREFIX >} logs"
processors:

Parse the syslog header

This populates the host.hostname, process.name, timestamp and other fields

from the header and stores the message contents in log.original.

  • grok:
    field: message
    patterns:
    - "(?:%{SYSLOG_HEADER})?\s*%{GREEDYDATA:log.original}"
    pattern_definitions:
    SYSLOG_HEADER: "(?:%{SYSLOGFACILITY}\s*)?(?:%{FTD_DATE:temp.raw_date}:?\s+)?(?:%{PROCESS_HOST}|%{HOST_PROCESS})(?:{DATA})?%{SYSLOG_END}?"
    SYSLOGFACILITY: "<%{NONNEGINT:syslog.facility:int}(?:.%{NONNEGINT:syslog.priority:int})?>"
    # Beginning with version 6.3, Firepower Threat Defense provides the option to enable timestamp as per RFC 5424.
    FTD_DATE: "(?:%{TIMESTAMP_ISO8601}|%{ASA_DATE})"
    ASA_DATE: "(?:%{DAY} )?%{MONTH} *%{MONTHDAY}(?: %{YEAR})? %{TIME}(?: %{TZ})?"
    PROCESS: "(?:[^%\s:\+)"
    SYSLOG_END: "(?:(:|\s)\s+)"
    # exactly match the syntax for firepower management logs
    PROCESS_HOST: "(?:%{PROCESS:process.name}:\s%{SYSLOGHOST:host.name})"
    HOST_PROCESS: "(?:%{SYSLOGHOST:host.hostname}:?\s+)?(?:%{PROCESS:process.name}?(?:\[%{POSINT:process.pid:long}\])?)?"

Does that text, as written, work straight in a Logstash pipeline? Can you clarify a little more how to convert this to work in Logstash?
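(The snippet above is an Elasticsearch ingest pipeline definition rather than Logstash configuration, so it will not drop straight into a Logstash pipeline. As a rough, untested illustration only, the first grok processor could be translated into a Logstash grok filter roughly like this: pattern_definitions is the grok filter option that corresponds to the ingest processor's map of the same name, the dotted ingest field names become Logstash field references, and :long becomes :int because the Logstash grok filter only supports int and float conversions.)

filter {
  # Rough Logstash equivalent of the first ingest-pipeline grok processor above (untested sketch).
  grok {
    match => { "message" => "(?:%{SYSLOG_HEADER})?\s*%{GREEDYDATA:[log][original]}" }
    pattern_definitions => {
      "SYSLOG_HEADER"  => "(?:%{SYSLOGFACILITY}\s*)?(?:%{FTD_DATE:[temp][raw_date]}:?\s+)?(?:%{PROCESS_HOST}|%{HOST_PROCESS})(?:%{DATA})?%{SYSLOG_END}?"
      "SYSLOGFACILITY" => "<%{NONNEGINT:[syslog][facility]:int}(?:.%{NONNEGINT:[syslog][priority]:int})?>"
      "FTD_DATE"       => "(?:%{TIMESTAMP_ISO8601}|%{ASA_DATE})"
      "ASA_DATE"       => "(?:%{DAY} )?%{MONTH} *%{MONTHDAY}(?: %{YEAR})? %{TIME}(?: %{TZ})?"
      "PROCESS"        => "(?:[^%\s:\[]+)"
      "SYSLOG_END"     => "(?:(:|\s)\s+)"
      "PROCESS_HOST"   => "(?:%{PROCESS:[process][name]}:\s%{SYSLOGHOST:[host][name]})"
      "HOST_PROCESS"   => "(?:%{SYSLOGHOST:[host][hostname]}:?\s+)?(?:%{PROCESS:[process][name]}?(?:\[%{POSINT:[process][pid]:int}\])?)?"
    }
  }
}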

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.