Ingesting syslog from NetApp ONTAP

I've configured our storage to send syslog to Filebeat, but when I examine what's been ingested in Kibana, the whole syslog message is crammed into a single "message" field, while the other 26 fields hold values relating to the server on which Filebeat is running.

Clearly I am a newbie. Should I be ingesting these NetApp syslogs the way I've configured so far (NetApp -> Filebeat -> Elasticsearch), or is there a better/right way to do this?

So far on the storage side I've selected rfc-5424:

Type of Destination: syslog
Destination: 10.6.11.104
Syslog Port: 9000
Syslog Transport: tcp-unencrypted
Syslog Message Format: rfc-5424

and this is the input config in Filebeat:

filebeat.inputs:
- type: syslog
  enabled: true
  format: rfc5424
  protocol.tcp:
    host: "10.6.11.104:9000"

All ELK components, including Filebeat, are running on pnl0000vspr4306 [10.6.11.104].
This is an example doc I see ingested in Analytics/Discover:

{
  "@timestamp": [
    "2023-03-21T14:05:08.675Z"
  ],
  "agent.ephemeral_id": [
    "bfeaccc6-e4a2-45f9-96f2-eec2bcfc4031"
  ],
  "agent.hostname": [
    "pnl0000vspr4306"
  ],
  "agent.id": [
    "098e76de-a2ba-496e-8596-cd0a4974d862"
  ],
  "agent.name": [
    "pnl0000vspr4306"
  ],
  "agent.type": [
    "filebeat"
  ],
  "agent.version": [
    "8.6.2"
  ],
  "ecs.version": [
    "8.0.0"
  ],
  "host.architecture": [
    "x86_64"
  ],
  "host.containerized": [
    false
  ],
  "host.hostname": [
    "pnl0000vspr4306"
  ],
  "host.id": [
    "d58181bc7055458dbb8646bbabe17714"
  ],
  "host.ip": [
    "10.6.11.104"
  ],
  "host.mac": [
    "00-50-56-B9-D0-38"
  ],
  "host.name": [
    "pnl0000vspr4306"
  ],
  "host.os.family": [
    ""
  ],
  "host.os.kernel": [
    "4.18.0-425.13.1.el8_7.x86_64"
  ],
  "host.os.name": [
    "Rocky Linux"
  ],
  "host.os.name.text": [
    "Rocky Linux"
  ],
  "host.os.platform": [
    "rocky"
  ],
  "host.os.type": [
    "linux"
  ],
  "host.os.version": [
    "8.7 (Green Obsidian)"
  ],
  "input.type": [
    "syslog"
  ],
  "log.source.address": [
    "10.6.190.21:11768"
  ],
  "message": [
    "<14>Mar 21 14:05:09 pnl0003scpr1621: pnl0003scpr1621: 00000017.00066861 0082971a Tue Mar 21 2023 14:05:08 +00:00 [kern_audit:info:3039] 8003e80000035585:8003e80000035586 :: pnl0003scpr1620:ssh :: 10.199.132.135:61381 :: pnl0003scpr1620:our_ad_domain\my_admin_account :: Logging in :: Success "
  ],
  "_id": "hVp8BIcBsjyao5wS8PGs",
  "_index": ".ds-filebeat-8.6.2-2023.03.21-000001",
  "_score": null
}

Hi @diselkgd7, welcome to the community, and a great start!

I am not a syslog expert, but that message does not really look like RFC-5424 (it fails validation). That's OK, though.

Now you just need to parse it a bit...

Create an ingest pipeline in Dev Tools.

I started from a stub I had:

PUT _ingest/pipeline/my-syslog-pipeline
{
    "description": "syslog",
    "processors": [
      {
        "set": {
          "field": "event.ingested",
          "value": "{{_ingest.timestamp}}"
        }
      },
      {
        "grok": {
          "ignore_missing": true,
          "field": "message",
          "patterns": [
            """%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYDATA:system.syslog.message}"""
          ],
          "ecs_compatibility": "v1"
        }
      },
      {
        "remove": {
          "field": "message"
        }
      },
      {
        "rename": {
          "target_field": "message",
          "ignore_missing": true,
          "field": "system.syslog.message"
        }
      },
      {
        "date": {
          "on_failure": [
            {
              "append": {
                "field": "error.message",
                "value": "{{ _ingest.on_failure_message }}"
              }
            }
          ],
          "if": "ctx.event.timezone == null",
          "field": "system.syslog.timestamp",
          "target_field": "@timestamp",
          "formats": [
            "MMM  d HH:mm:ss",
            "MMM dd HH:mm:ss",
            "MMM d HH:mm:ss",
            "ISO8601"
          ]
        }
      },
      {
        "date": {
          "field": "system.syslog.timestamp",
          "target_field": "@timestamp",
          "formats": [
            "MMM  d HH:mm:ss",
            "MMM dd HH:mm:ss",
            "MMM d HH:mm:ss",
            "ISO8601"
          ],
          "timezone": "{{ event.timezone }}",
          "on_failure": [
            {
              "append": {
                "field": "error.message",
                "value": "{{ _ingest.on_failure_message }}"
              }
            }
          ],
          "if": "ctx.event.timezone != null"
        }
      },
      {
        "remove": {
          "field": "system"
        }
      },
      {
        "set": {
          "field": "event.kind",
          "value": "event"
        }
      }
    ],
    "on_failure": [
      {
        "set": {
          "field": "error.message",
          "value": "{{ _ingest.on_failure_message }}"
        }
      }
    ]
}
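
Before wiring the pipeline into Filebeat, you can test it against your sample message with the simulate API in Dev Tools (note the backslash in the domain account has to be escaped in the JSON body):

```
POST _ingest/pipeline/my-syslog-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "message": "<14>Mar 21 14:05:09 pnl0003scpr1621: pnl0003scpr1621: 00000017.00066861 0082971a Tue Mar 21 2023 14:05:08 +00:00 [kern_audit:info:3039] 8003e80000035585:8003e80000035586 :: pnl0003scpr1620:ssh :: 10.199.132.135:61381 :: pnl0003scpr1620:our_ad_domain\\my_admin_account :: Logging in :: Success "
      }
    }
  ]
}
```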

Now set the Filebeat input to use the pipeline:

filebeat.inputs:
- type: syslog
  enabled: true
  format: rfc5424
  protocol.tcp:
    host: "10.6.11.104:9000"
  pipeline: my-syslog-pipeline
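
After restarting Filebeat, a quick way to confirm new documents are actually going through the pipeline (assuming the default filebeat-* data stream naming) is to check that the newest docs carry the field the grok processor extracts:

```
GET filebeat-*/_search
{
  "size": 1,
  "sort": [ { "@timestamp": "desc" } ],
  "query": { "exists": { "field": "log.syslog.priority" } }
}
```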

And you should get something that looks like:

{
  "docs": [
    {
      "doc": {
        "_index": "_index",
        "_id": "_id",
        "_version": "-3",
        "_source": {
          "@timestamp": "2023-03-21T14:05:09.000Z",
          "event": {
            "ingested": "2023-03-22T00:56:03.655083947Z",
            "kind": "event"
          },
          "message": """pnl0003scpr1621: pnl0003scpr1621: 00000017.00066861 0082971a Tue Mar 21 2023 14:05:08 +00:00 [kern_audit:info:3039] 8003e80000035585:8003e80000035586 :: pnl0003scpr1620:ssh :: 10.199.132.135:61381 :: pnl0003scpr1620:our_ad_domain\my_admin_account :: Logging in :: Success """,
          "log": {
            "syslog": {
              "priority": 14
            }
          }
        },
        "_ingest": {
          "timestamp": "2023-03-22T00:56:03.655083947Z"
        }
      }
    }
  ]
}

You can read more about ingest pipelines, grok, ECS fields, etc.

There is a Grok Debugger in Kibana, and there are plenty of others online for iterating on a pattern incrementally, but you probably want to be aware of ECS.
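
For reference, to try this in the Kibana Grok Debugger, you can paste the raw message as the sample data and the pattern from the pipeline above:

```
Sample data:
<14>Mar 21 14:05:09 pnl0003scpr1621: pnl0003scpr1621: 00000017.00066861 0082971a Tue Mar 21 2023 14:05:08 +00:00 [kern_audit:info:3039] 8003e80000035585:8003e80000035586 :: pnl0003scpr1620:ssh :: 10.199.132.135:61381 :: pnl0003scpr1620:our_ad_domain\my_admin_account :: Logging in :: Success

Grok pattern:
%{SYSLOG5424PRI}%{SYSLOGTIMESTAMP:system.syslog.timestamp} %{GREEDYDATA:system.syslog.message}
```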