I'm trying to use the panw module to receive data via a syslog port. I've enabled the module and can see that filebeat is listening on the proper UDP port. I can see traffic arrive in tcpdump, and the events look valid.
I enabled debug logging in filebeat and I don't see anything that looks like an event arriving, so I don't know what else to check. Shouldn't a debug log show something when the input is received?
Hi @rugenl
Please share your config and logs.
Do you see filebeat reporting that it is listening on the UDP port?
You can also set the output to console to just eliminate variables.
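In case it helps, switching the output to console for a test run might look something like this (a minimal sketch; `pretty` is optional, and filebeat only allows one output at a time):

```yaml
# filebeat.yml -- temporary debugging output
# (comment out the real output section first; only one output may be enabled)
output.console:
  pretty: true
```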
It would be nice if the log message included a port number, but netstat shows that the correct port is in use by filebeat.
This filebeat instance is running other modules, so I can't redirect the output to console, but I switched to a simple `- type: syslog` input, without the panw module, and it does index the "raw" syslog events.
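For reference, the bare syslog input I used as a sanity check looked roughly like this (the port here is an assumption; use whatever your devices actually send to):

```yaml
# filebeat.yml -- plain syslog input, bypassing the panw module
filebeat.inputs:
  - type: syslog
    protocol.udp:
      host: "0.0.0.0:9001"  # assumed port; match your device config
```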
The debug log doesn't show anything about the event until it publishes it. Is there a way to turn on debugging in the module? The doc shows how to enable logging for the publisher selector, but doesn't seem to list any other selectors. I'm just setting `logging.level: debug` in filebeat.yml.
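For the record, this is roughly the logging config I'm using now (the `"*"` selector is an assumption from the general filebeat logging docs, meant to enable all debug selectors rather than just the publisher):

```yaml
# filebeat.yml -- debug logging; "*" enables all debug selectors,
# which should cover the input stages as well as the publisher
logging.level: debug
logging.selectors: ["*"]
```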
Yes, I did the setup and verified the ingest pipelines are loaded. I think that's all setup does; the data streams are set up, as verified by the `- type: syslog` test. I'm about 90% sure filebeat isn't trying to send the events to Elasticsearch.
They are supposedly on PAN-OS 10.1; I can't find any comments in the module docs about version support.
> This module has been tested with logs generated by devices running PAN-OS versions 7.1 to 9.0 but limited compatibility is expected for earlier versions.
> The ingest-geoip Elasticsearch plugin is required to run this module.
I'm hoping that is just an outdated doc reference; PAN-OS 9.0 went out of support March 1, 2022. I'm willing to work on "fixing" the module for the new data format if that is needed; I just can't get to a point where I can see what is failing.
I'm just the syslog catcher on this; I don't know anything about PAN-OS that I haven't Googled today.
Our prior ingest was done in logstash and was set up before modules and ingest pipelines were available. It was doing a CSV split on the data and sending it to Elasticsearch, but it was failing because a field that was mapped as an IP address was now text. The index failure was a clear message in the logstash log.
Thanks. Stay tuned, I'm hoping to get back to this after chumming the daily sharks...
Yeah, we just need the error... If the ingest pipeline or the actual write is failing, you should typically be able to see that in the filebeat logs.
Once we get that, we should be able to figure it out.
I enabled the module, set `-d "*"`, and then just netcatted a message to the port and saw it go through to Elasticsearch. I just sent "Hello World" and the pipeline did not fail, etc.
For some reason I'm thinking filebeat is not reading from the UDP port, or there is some other disconnect.
Well, this is embarrassing.... I'm amazed that I can read this:
> var.syslog_host
> The interface to listen to UDP based syslog traffic. Defaults to localhost. Set to 0.0.0.0 to bind to all available interfaces.
As "Defaults to '0.0.0.0"... I was only listening on localhost, it's working now. It looks like some "types" are still failing, I know I'm not seeing "Threat" events, probably the same mapping error.