Using Filebeat syslog input for PANW: logs not publishing to Kibana

So I am trying to get the PANW module for Filebeat running. Currently it's configured to receive Palo Alto logs via UDP. When I run Filebeat in debug mode I can see logs being parsed, yet they are not showing up in Kibana.

Here is the relevant part of my filebeat.yml:
```yaml
setup.kibana:
  host: "https://kibanahost:5601"
  protocol: "https"
  username: "filebeat"
  password: "{password}"
  ssl.verification_mode: "none"

output.elasticsearch:
  hosts: ["https://elastichost:9200"]
  protocol: "https"
  username: "elastic"
  password: "{Password}"
  ssl.verification_mode: none
```
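For a config like this, the index template, module pipelines, and dashboards are typically loaded once with Filebeat's setup command before shipping data. A minimal sketch, assuming Filebeat 7.x and that the credentials above are valid:

```sh
# Load the index template/ILM policy and the panw ingest pipelines into
# Elasticsearch, and the dashboards into Kibana, using the
# setup.kibana/output.elasticsearch settings from filebeat.yml
filebeat setup --index-management --pipelines --modules panw
filebeat setup --dashboards
```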

and panw.yml:

```yaml
- module: panw
  panos:
    enabled: true
    var.input: "syslog"
    var.syslog_host: 0.0.0.0
    var.syslog_port: 10514
```
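Before digging into debug output, the config and output connectivity can be sanity-checked from the CLI. A minimal sketch; the config path here is an assumption (the package default):

```sh
# Verify the YAML and module configuration parse cleanly
filebeat test config -c /etc/filebeat/filebeat.yml

# Verify Filebeat can reach and authenticate against the configured
# Elasticsearch output
filebeat test output -c /etc/filebeat/filebeat.yml
```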

I ran Filebeat in debug mode and can see publish events, yet nothing is showing up:

```
DEBUG [elasticsearch] elasticsearch/client.go:347 PublishEvents: 1 events have been published to elasticsearch in 3.813019ms.
```
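For reference, debug output like the above can be produced by running Filebeat in the foreground with all debug selectors enabled; the exact invocation used here is an assumption:

```sh
# Run in the foreground, log to stderr (-e), and enable every debug selector
filebeat -e -d "*"
```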

Any help would be appreciated.

I'm not sure your PANW messages are actually being published; that single event could be from Beats monitoring. A couple of things that could help (see the sketch after the list):

• Set the IP address of the host in `var.syslog_host` instead of 0.0.0.0
• Open UDP port 10514 in the firewall
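A quick way to check both from the Filebeat host; a sketch, where 127.0.0.1 assumes you test locally:

```sh
# Confirm Filebeat is actually listening on UDP 10514
ss -lun | grep 10514

# Hand-craft a syslog message and send it to the input; it should appear
# as a "Publish event" in Filebeat's debug output
echo '<14>Oct 22 12:00:00 testhost test: hello' | nc -u -w1 127.0.0.1 10514
```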

Kind regards

The IP is set, and I made sure that the port is open on the firewall, but I'm still not receiving data. For added context, here is a dump in debug mode:

```
2019-10-22T19:53:36.866Z DEBUG [processors] processing/processors.go:183 Publish event: {
  "@timestamp": "2019-10-22T12:53:36.000Z",
  "@metadata": {
    "beat": "filebeat",
    "type": "_doc",
    "version": "7.3.2",
    "truncated": false,
    "pipeline": "filebeat-7.3.2-panw-panos-pipeline"
  },
  "host": {
    "name": "rsyslog-FB",
    "os": {
      "platform": "ubuntu",
      "version": "18.04.3 LTS (Bionic Beaver)",
      "family": "debian",
      "name": "Ubuntu",
      "kernel": "5.0.0-1018-azure",
      "codename": "bionic"
    },
    "id": "92f1bfed803043dfafa56fe8c2f3338b",
    "containerized": false,
    "hostname": "rsyslog-FB",
    "architecture": "x86_64"
  },
  "hostname": "{PaloAltoHostname}",
  "message": "1,2019/10/22 12:53:36,012801059187,SYSTEM,dhcp,0,2019/10/22 12:53:36,,lease-start,,0,0,general,informational,\"DHCP lease started ip {*} --> mac 8c:85:90:6d:f8:fd - hostname Ladys-MBP-2, interface ethernet1/2\",1363611,0x8000000000000000,0,0,0,0,,{PaloAltoHostname}",
  "log": {
    "source": {
      "address": "{IP}"
    }
  },
  "tags": [
    "pan-os"
  ],
  "cloud": {
    "machine": {
      "type": "Standard_D2s_v3"
    },
    "region": "westus",
    "provider": "az",
    "instance": {
      "name": "rsyslog-FB",
      "id": "0b4dd33e-eb7a-44d7-a614-a89ffe23e608"
    }
  },
  "syslog": {
    "facility": 1,
    "facility_label": "user-level",
    "priority": 14,
    "severity_label": "Informational"
  },
  "event": {
    "severity": 6,
    "module": "panw",
    "dataset": "panw.panos",
    "timezone": "+00:00",
    "created": "2019/10/22 12:53:36"
  },
  "service": {
    "type": "panw"
  },
  "observer": {
    "serial_number": "012801059187"
  },
  "agent": {
    "hostname": "rsyslog-FB",
    "id": "64eb4efc-6e00-490b-9ef1-86e41a9f2159",
    "version": "7.3.2",
    "type": "filebeat",
    "ephemeral_id": "978e8318-0e31-4b13-93c8-a0f2adbe8e6e"
  },
  "input": {
    "type": "syslog"
  },
  "fileset": {
    "name": "panos"
  },
  "ecs": {
    "version": "1.0.1"
  },
  "temp": {
    "message_type": "SYSTEM",
    "message_subtype": "dhcp",
    "generated_time": "2019/10/22 12:53:36"
  }
}
2019-10-22T19:53:37.869Z DEBUG [elasticsearch] elasticsearch/client.go:347 PublishEvents: 1 events have been published to elasticsearch in 2.464703ms.
2019-10-22T19:53:37.869Z DEBUG [publisher] memqueue/ackloop.go:160 ackloop: receive ack [8: 0, 1]
2019-10-22T19:53:37.869Z DEBUG [publisher] memqueue/eventloop.go:535 broker ACK events: count=1, start-seq=9, end-seq=9
2019-10-22T19:53:37.870Z DEBUG [publisher] memqueue/ackloop.go:128 ackloop: return ack to broker loop:1
2019-10-22T19:53:37.870Z DEBUG [publisher] memqueue/ackloop.go:131 ackloop: done send ack
2019-10-22T19:53:37.870Z DEBUG [acker] beater/acker.go:69 stateless ack {"count": 1}
```

Ok, the debug log indeed shows your Palo Alto log being published, and you are authenticating with the `elastic` user. Have you managed to index any other type of logs with Filebeat yet? Are you indexing through an ingest node? Do you see any errors in the Elasticsearch cluster logs? Do you have the correct index template and ingest pipeline? (Both can be checked directly, as sketched below.)
Hope these questions get you going.. :wink:
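To answer the template/pipeline question concretely, both can be checked against the cluster directly. A sketch, reusing the hostname and user from the config above; `-k` mirrors `ssl.verification_mode: none`:

```sh
# Does the ingest pipeline referenced in the event metadata exist?
curl -k -u elastic 'https://elastichost:9200/_ingest/pipeline/filebeat-7.3.2-panw-panos-pipeline?pretty'

# Have any panw events actually been indexed?
curl -k -u elastic 'https://elastichost:9200/filebeat-*/_search?q=event.module:panw&size=1&pretty'
```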

I was able to figure out the issue: it turns out the admin was sending diagnostic logs rather than traffic and alert logs.
Issue solved.
