Bug in Cisco FTD Integration's Ingest Pipeline for Message IDs 302013 and 302015

We have been getting quite a lot of "Network Traffic to Rare Destination Country" alerts from the associated ML job, and after looking into each of these detections, the vast majority are false positives for incoming port scans that are rightfully being blocked. Digging further, I found the issue to be twofold: I believe it starts with the firewalls sending the messages with the incorrect inbound/outbound direction, but the parsing for Cisco Message IDs 302013 and 302015 marked "inbound" is also incorrect.

There are three Grok patterns defined for these Message IDs:

Built (?<network.direction>outbound) %{NOTSPACE:network.transport} connection %{NUMBER:_temp_.cisco.connection_id} for %{NOTCOLON:_temp_.cisco.source_interface}:%{IPORHOST:source.address}/%{NUMBER:source.port} \(%{IPORHOST:_temp_.natsrcip}/%{NUMBER:_temp_.cisco.mapped_source_port}\)(\(%{CISCO_USER:_temp_.cisco.source_username}\))? to %{NOTCOLON:_temp_.cisco.destination_interface}:%{NOTSPACE:destination.address}/%{NUMBER:destination.port} \(%{NOTSPACE:_temp_.natdstip}/%{NUMBER:_temp_.cisco.mapped_destination_port}\)(\(%{CISCO_USER:_temp_.cisco.destination_username}\))?( \(%{CISCO_USER:_temp_.cisco.termination_user}\))?%{GREEDYDATA}

Built (?<network.direction>inbound) %{NOTSPACE:network.transport} connection %{NUMBER:_temp_.cisco.connection_id} for %{NOTCOLON:_temp_.cisco.destination_interface}:%{IPORHOST:destination.address}/%{NUMBER:destination.port} \(%{IPORHOST:_temp_.natsrcip}/%{NUMBER:_temp_.cisco.mapped_destination_port}\)(\(%{CISCO_USER:_temp_.cisco.destination_username}\))? to %{NOTCOLON:_temp_.cisco.source_interface}:%{NOTSPACE:source.address}/%{NUMBER:source.port} \(%{NOTSPACE:_temp_.natdstip}/%{NUMBER:_temp_.cisco.mapped_source_port}\)(\(%{CISCO_USER:_temp_.cisco.source_username}\))?( \(%{CISCO_USER:_temp_.cisco.termination_user}\))?%{GREEDYDATA}

Built %{NOTSPACE:network.direction} %{NOTSPACE:network.transport} connection %{NUMBER:_temp_.cisco.connection_id} for %{NOTCOLON:_temp_.cisco.source_interface}:%{IPORHOST:source.address}/%{NUMBER:source.port} \(%{IPORHOST:_temp_.natsrcip}/%{NUMBER:_temp_.cisco.mapped_source_port}\)(\(%{CISCO_USER:_temp_.cisco.source_username}\))? to %{NOTCOLON:_temp_.cisco.destination_interface}:%{NOTSPACE:destination.address}/%{NUMBER:destination.port} \(%{NOTSPACE:_temp_.natdstip}/%{NUMBER:_temp_.cisco.mapped_destination_port}\)(\(%{CISCO_USER:_temp_.cisco.destination_username}\))?( \(%{CISCO_USER:_temp_.cisco.termination_user}\))?%{GREEDYDATA}

The issue lies with the second Grok pattern: it parses the "for" interface:IP/port as the destination details when it should be the source, and the "to" interface:IP/port as the source details when it should be the destination. The _temp_.natsrcip and _temp_.natdstip fields are correct in all three patterns.

The "for" section of the message is always the source, and the "to" section is always the destination. I believe this incorrect parsing may have been due to Cisco sending the messages with the incorrect inbound/outbound values, and the Elastic devs thought they were logically adjusting for this. We are not the only ones to notice this issue on the Cisco side, and being EA customers with Cisco, we are in the process of opening an ticket with them to resolve that. Based on everything I have found, I believe the third Grok pattern correctly covers all cases of Message IDs 302013 and 302015, so I feel the first two patterns should be removed.

Here are some sample (redacted) events taken directly from one of our FTD firewalls. The first is truly an inbound SSH port scan, but the Grok pattern swaps the source and destination details. The second is an outbound HTTPS connection that Cisco incorrectly marks as inbound, but the Grok pattern correctly maps the source and destination details.

<166>Mar 22 2023 01:44:32 fw-contoso-1 : %FTD-6-302013: Built inbound TCP connection 123456789 for IF-Outside:190.97.114.166/19884 (190.97.114.166/19884) to IF-Inside:192.168.0.10/22 (8.8.8.8/22)
<166>Mar 22 2023 17:30:25 fw-contoso-1 : %FTD-6-302013: Built inbound TCP connection 234567891 for IF-Inside:192.168.0.20/61863 (8.8.8.8/16013) to IF-Outside:104.18.32.68/443(104.18.32.68/443)
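
For anyone who wants to reproduce this outside the pipeline, here is a quick Python sketch of my own. It is only a simplified regex approximation of the third Grok pattern (it ignores the optional username groups and is not the actual pipeline definition), but it shows that treating the "for" side as source and the "to" side as destination parses both samples cleanly:

import re

# Simplified stand-in for the third Grok pattern, for illustration only:
# the "for" side always maps to source.* and the "to" side to destination.*.
# The optional (user) groups from the real pattern are omitted.
PATTERN = re.compile(
    r"Built (?P<direction>\S+) (?P<transport>\S+) connection (?P<conn_id>\d+)"
    r" for (?P<src_if>[^:]+):(?P<src_ip>\S+)/(?P<src_port>\d+)"
    r" \((?P<nat_src_ip>\S+)/(?P<nat_src_port>\d+)\)"
    r" to (?P<dst_if>[^:]+):(?P<dst_ip>\S+)/(?P<dst_port>\d+)"
    r" ?\((?P<nat_dst_ip>\S+)/(?P<nat_dst_port>\d+)\)"
)

samples = [
    "Built inbound TCP connection 123456789 for IF-Outside:190.97.114.166/19884 "
    "(190.97.114.166/19884) to IF-Inside:192.168.0.10/22 (8.8.8.8/22)",
    "Built inbound TCP connection 234567891 for IF-Inside:192.168.0.20/61863 "
    "(8.8.8.8/16013) to IF-Outside:104.18.32.68/443(104.18.32.68/443)",
]

for line in samples:
    m = PATTERN.search(line)
    if m:
        print(f"{m['direction']}: source={m['src_if']}:{m['src_ip']}/{m['src_port']}"
              f" -> destination={m['dst_if']}:{m['dst_ip']}/{m['dst_port']}")

The first sample prints source=IF-Outside:190.97.114.166/19884 -> destination=IF-Inside:192.168.0.10/22, i.e. an external host scanning SSH on an internal host, which is what actually happened.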

Following the rule on posting new issues, I am first getting a discussion started, but this should be opened as a bug so a correction can be made to the Grok patterns. I have already removed the first two patterns in our environment, and aside from Cisco incorrectly marking inbound/outbound, our connection details are now more accurate. I understand my fix will be overwritten on the next update to the Cisco FTD Integration, so I am hoping to get it made permanent.
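
If anyone else wants to check a local pipeline edit like this before the integration itself is fixed, one way is to run a sample event through the edited pipeline with Elasticsearch's _simulate API. Here is a rough sketch; the host, credentials, pipeline id, and the field name the pipeline reads the raw syslog line from are assumptions for illustration, so adjust them to your environment:

import requests

# Placeholders below: the installed pipeline id is versioned (check
# GET _ingest/pipeline/*cisco_ftd* for the exact name), and depending on the
# integration version the raw line may live in "message" or "event.original".
pipeline_id = "logs-cisco_ftd.log-2.9.0"  # placeholder: use your installed version's id

sample = (
    "<166>Mar 22 2023 01:44:32 fw-contoso-1 : %FTD-6-302013: Built inbound TCP "
    "connection 123456789 for IF-Outside:190.97.114.166/19884 (190.97.114.166/19884) "
    "to IF-Inside:192.168.0.10/22 (8.8.8.8/22)"
)

resp = requests.post(
    f"https://localhost:9200/_ingest/pipeline/{pipeline_id}/_simulate",
    json={"docs": [{"_source": {"message": sample}}]},
    auth=("elastic", "changeme"),
    verify=False,  # lab only; verify certificates properly in production
)
resp.raise_for_status()
doc = resp.json()["docs"][0]["doc"]["_source"]
print(doc.get("source"), "->", doc.get("destination"))

With the first two Grok patterns removed, the simulated document should show the external scanner under source and the internal SSH host under destination.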

Thank you,
Eric

It is better to open an issue in the integrations repository so someone can fix it, or you can also open a PR to fix it yourself.

But you are right, the second grok is wrong: regardless of whether Cisco correctly marks the connection as inbound or outbound, the "for" is the source and the "to" is the destination.

I do not have FTD anymore, but I have an ASA with a similar log, and I use the following dissect filter in Logstash.

dissect {
    mapping => {
        "[cisco][log][message]" => "Built %{[network][direction]} %{[network][transport]} connection %{} for %{[source][interface][name]}:%{[source][ip]}/%{[source][port]} (%{[source][nat][ip]}/%{[source][nat][port]}) to %{[destination][interface][name]}:%{[destination][ip]}/%{[destination][port]} (%{[destination][nat][ip]}/%{[destination][nat][port]})"
    }
}


Yeah, most repos say to first open a Discuss thread before creating an issue, so just "following the rules". 🙂

I will open an issue and see if I can get a PR done quickly. Been hard to find the time to even blink lately!

Eric

I'm doing a couple of PRs here for some issues we are seeing in our company with some integrations (Cisco as well), so maybe I can help with that.

Should be a pretty easy one. I'm going to try to whip it up real quick, but I sincerely appreciate the offer!

Eric

Opened issue #5647 and PR #5648 to correct this.

Eric
