Should Filebeat's Cisco module export ECS fields when outputting to something other than Elasticsearch?

I'm new to Filebeat, and I am trying to understand what data it should be exporting when sending to Logstash.

I have set up Filebeat to read Cisco ASA log files and output to Logstash. I expected it to parse more than I am getting. For example, I would have expected it to break out the source/destination IPs into the corresponding ECS fields.

Here is a sample output that logstash is receiving:

 {
   "@version": "1",
   "event": {
     "dataset": "cisco.asa",
     "module": "cisco",
     "timezone": "-08:00"
   },
   "agent": {
     "ephemeral_id": "5d836438-73d9-4218-bbb6-6313da4e2e5a",
     "name": "FILEBEATHOST",
     "hostname": "FILEBEATHOST",
     "type": "filebeat",
     "version": "7.9.3",
     "id": "3fd5b219-1427-46a5-abc3-b5747b3c7aec"
   },
   "log": {
     "offset": 182676367,
     "file": {
       "path": "/var/log/beats/filebeat/cisco/asa/asa.log"
     }
   },
   "input": {
     "type": "log"
   },
   "observer": {
     "mac": [
       "00:50:56:a9:eb:a1",
       "00:50:56:a9:56:c3"
     ],
     "ip": [
       "10.99.251.21",
       "fe80::250:56ff:fea9:eba1",
       "10.99.250.21",
       "fe80::250:56ff:fea9:56c3"
     ],
     "hostname": "FILEBEATHOST"
   },
   "@timestamp": "2020-11-09T17:11:28.954Z",
   "fileset": {
     "name": "asa"
   },
   "message": "Nov  9 09:11:28 NOTREALLYMYFILEWALLNAME : %ASA-6-302013: Built outbound TCP connection 1967928 for DMZ50:8.8.8.8/53 (8.8.8.8/53) to SECURITY:10.111.249.100/41846 (10.111.249.100/41846)",  
   "service": {
     "type": "cisco"
   },
   "tags": [
     "cisco-asa",
     "forwarded",
     "beats_input_codec_plain_applied"
   ],
   "host": {
     "architecture": "x86_64",
     "os": {
       "kernel": "4.15.0-122-generic",
       "name": "Ubuntu",
       "codename": "bionic",
       "family": "debian",
       "version": "18.04.4 LTS (Bionic Beaver)",
       "platform": "ubuntu"
     },
     "hostname": "FILEBEATHOST",
     "containerized": false,
     "mac": [
       "00:50:56:a9:eb:a1",
       "00:50:56:a9:56:c3"
     ],
     "ip": [
       "10.99.251.21",
       "fe80::250:56ff:fea9:eba1",
       "10.99.250.21",
       "fe80::250:56ff:fea9:56c3"
     ],
     "id": "d6c1de2b0d34454caa2f2f6c2d89cf85"
   }
 }
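
To be concrete, for a message like the one above I expected additional ECS fields to be broken out, something roughly like this (just an illustration of the field names I had in mind from the ECS reference, not real output, and the source/destination mapping is only my own reading of the message):

 {
   "source": {
     "ip": "10.111.249.100",
     "port": 41846
   },
   "destination": {
     "ip": "8.8.8.8",
     "port": 53
   },
   "network": {
     "transport": "tcp"
   }
 }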

filebeat.yml:

 filebeat.inputs:
 - type: log
   enabled: false
   paths:
     - /tmp/testlog
   backoff: 1s
   max_backoff: 10s
   backoff_factor: 2
   close_inactive: 5m
   close_renamed: false
   close_removed: true
   close_eof: false
   clean_removed: true
 filebeat.config.modules:
   path: ${path.config}/modules.d/*.yml
   reload.enabled: false
 #output.console:
 #  enabled: false
 #  pretty: true
 output.logstash:
   enabled: true
   hosts: ["localhost:5044"]
 processors:
   - add_process_metadata:
       match_pids: [system.process.ppid]
       target: system.process.parent
   - add_host_metadata:
       netinfo.enabled: true
   - add_observer_metadata:
       netinfo.enabled: true
   - add_cloud_metadata: ~
   - add_docker_metadata: ~
   - add_kubernetes_metadata: ~
   - community_id: ~
   - add_process_metadata:
       match_pids: [system.process.ppid]
       target: system.process.parent
 logging:
   level: info
   to_syslog: true
 logging.selectors: ["*"]

Am I missing something, or is this the expected behavior?

Hello! We have a dedicated cisco module for parsing ASA logs. You can use it by enabling the module first:

 ./filebeat modules enable cisco

and then editing modules.d/cisco.yml to change var.syslog_host and/or var.syslog_port. Using the asa fileset under the cisco module will parse the log for you. For example, you can see what the output would look like here: https://github.com/elastic/beats/blob/master/x-pack/filebeat/module/cisco/asa/test/asa.log-expected.json
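
If you go with the syslog input, the relevant part of modules.d/cisco.yml would look roughly like this (the host and port here are just placeholders to adjust for your environment):

 - module: cisco
   asa:
     enabled: true
     var.syslog_host: 0.0.0.0
     var.syslog_port: 9001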

@Kaiyan_Sheng, thanks for responding. I had enabled the cisco module. Here is the config I am using:

cisco.yml

- module: cisco
  asa:
    enabled: true
    var.input: file
    var.paths: ["/var/log/beats/filebeat/cisco/asa/*.log*"]
    var.log_level: 7

  ios:
    enabled: true
    var.input: file
    var.paths: ["/var/log/beats/filebeat/cisco/ios/*.log*"]
    var.log_level: 7

  nexus:
    enabled: true
    var.input: file
    var.paths: ["/var/log/beats/filebeat/cisco/nx-os/*.log*"]
    var.rsa_fields: true

Does the syslog input have to be used with Filebeat to get the output enriched with the ECS fields?

Thank you for the config! I don't think the syslog input is required here. What version are you running? Maybe you can check in Kibana to make sure that the pipelines for cisco are what you expect. For example, this is the expected ingest pipeline: https://github.com/elastic/beats/blob/master/x-pack/filebeat/module/cisco/shared/ingest/asa-ftd-pipeline.yml and you can use the get pipeline API (https://www.elastic.co/guide/en/elasticsearch/reference/master/get-pipeline-api.html) to check what you have in ES.
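
For example, something along these lines should list the loaded cisco/asa pipeline (the exact pipeline ID includes your Filebeat version, so the wildcard here is just a guess):

 curl -X GET "http://localhost:9200/_ingest/pipeline/filebeat-*-cisco-asa*?pretty"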

@Kaiyan_Sheng,

Filebeat version 7.9

I am not using Elasticsearch or Kibana. It is straight Filebeat to Logstash.

I have run:

      filebeat setup --pipelines --modules cisco
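
(For reference, my understanding from the Logstash documentation on working with Filebeat modules is that the actual parsing happens in an Elasticsearch ingest pipeline, and the Logstash side hands events to that pipeline with an output roughly like the one below. That assumes an Elasticsearch output, which I don't have in this setup.)

 output {
   elasticsearch {
     hosts => "localhost:9200"
     manage_template => false
     index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
     pipeline => "%{[@metadata][pipeline]}"
   }
 }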
