Filebeat F5 AFM Module Log Format

We're currently trying to get the bigipafm fileset in the F5 module to parse the logs coming in from our F5 appliance. The documentation is missing both the required log format and the F5 AFM versions that are supported.

There is an example log in https://github.com/elastic/beats/blob/master/x-pack/filebeat/module/f5/bigipafm/test/generated.log but it does not match what we're getting from the appliance. A redacted sample string from our input:
<13>Dec 14 15:10:20 afm-h14lb-8 afmlog /Common/vlan124 75.189.17.66:49268 EN/Norfolk via /Common/vlan10-ACME-dmz-IN --> 195.50.81.18:443 TCP Accept Rule auth.ACME.test

Does someone have experience regarding the settings that are required on the F5 side to make the module's parsing script work?

Generally speaking, Filebeat filesets only support the modules' default log formats. Is it possible that your F5 is outputting non-default information and that's why Filebeat is not parsing it correctly?

Can you provide more information? Can you paste the output error here, please? :slightly_smiling_face:

It's very likely that the output isn't standard, you're right. I understand that the error stems from the mismatch in log formats. What I'm looking for is the format that the F5 admin needs to configure so the output complies with what the module expects.

I can't extract the error right now, but parsing fails in the pipeline.js processor, because the incoming format differs from what's expected, as established above.

var hdr1 = match("HEADER#0:0001", "message", "%{hfld1->} %{hfld2->} %{hhostname->} %{hfld3->} %{hfld4->} %{hfld5->} [F5@%{hfld6->} %{payload}", processor_chain([
    setc("header_id","0001"),
    setc("messageid","BIGIP_AFM"),
]));
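For illustration, the header pattern above can be approximated as a regular expression. The redacted sample line never contains the literal `[F5@` that opens the structured-data element the pattern requires, so the header match fails. This is a minimal sketch for demonstration, not the actual pipeline code:

```python
import re

# Approximate regex equivalent of the HEADER#0:0001 pattern above:
# six whitespace-delimited tokens, then a literal "[F5@" opening a
# structured-data element, then the payload.
header = re.compile(r"^(\S+) (\S+) (\S+) (\S+) (\S+) (\S+) \[F5@(\S+) (.*)$")

# Redacted sample from the appliance (contains no "[F5@" marker):
sample = ("<13>Dec 14 15:10:20 afm-h14lb-8 afmlog /Common/vlan124 "
          "75.189.17.66:49268 EN/Norfolk via /Common/vlan10-ACME-dmz-IN "
          "--> 195.50.81.18:443 TCP Accept Rule auth.ACME.test")

print(header.match(sample))  # None: the header pattern does not match
```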

You can add a Dissect processor to the ingest node pipeline (filebeat-7.10.0-f5-bigipafm-pipeline).

If you have the exact log format, you can add the Dissect processor as the last step of the pipeline.
For example:

  • Fill Field with the value event.original
  • Fill Pattern with %{?month} %{?date} %{?time} %{observer.name} %{observer.apps} %{source.profile} %{source.ip}:%{source.port} %{source.region} %{?via} %{source.gateway.profile} %{?} %{destination.ip}:%{destination.port} %{network.type} %{event.action} %{?} %{firewall.rule->}
  • Check Ignore missing
  • Fill Condition (optional) with ctx?.event?.original.contains('-->')
  • Update the processor
  • Save the pipeline
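As a sanity check, the steps above can be simulated roughly in Python to see how Dissect would split the redacted sample line. This sketch assumes the syslog priority `<13>` has already been stripped before the processor runs, and that `%{?...}` keys are matched but discarded, as in Dissect:

```python
# Redacted sample line, with the "<13>" syslog priority removed
# (an assumption about where in the pipeline Dissect runs).
sample = ("Dec 14 15:10:20 afm-h14lb-8 afmlog /Common/vlan124 "
          "75.189.17.66:49268 EN/Norfolk via /Common/vlan10-ACME-dmz-IN "
          "--> 195.50.81.18:443 TCP Accept Rule auth.ACME.test")

# Field names in the order they appear in the Dissect pattern above;
# the trailing "->" on firewall.rule (right padding) is dropped here.
keys = ["?month", "?date", "?time", "observer.name", "observer.apps",
        "source.profile", "source.ip:port", "source.region", "?via",
        "source.gateway.profile", "?", "destination.ip:port",
        "network.type", "event.action", "?", "firewall.rule"]

fields = {}
for key, tok in zip(keys, sample.split(" ")):
    if key.startswith("?"):
        continue  # "?" keys are matched but not kept, as in Dissect
    if key.endswith(".ip:port"):
        prefix = key.split(".ip:port")[0]      # e.g. "source"
        ip, port = tok.rsplit(":", 1)          # split "a.b.c.d:443"
        fields[prefix + ".ip"] = ip
        fields[prefix + ".port"] = port
    else:
        fields[key] = tok

print(fields["source.ip"], fields["destination.port"],
      fields["firewall.rule"])  # 75.189.17.66 443 auth.ACME.test
```

If the extracted values look right here, the same pattern should behave the same way in the ingest pipeline's Dissect processor.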

Thanks for the suggestion, I'll try that as soon as we get the connection to Elasticsearch up and running, which will take a while. I will update you once I've tried it out.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.