Parsing Palo Alto System Logs with Date Filter

I am trying to parse Palo Alto System logs with Logstash using the following dissect pattern inside a Logstash filter:

dissect {
   mapping => { "message" => '%{?date} %{?date} %{?date} %{?network} - %{?version},%{@timestamp},%{observer.serial_number},%{type},%{sub_type},%{?FUTURE_USE},%{event.created},%{virtual_system},%{event.id},%{object},%{?FUTURE_USE},%{?FUTURE_USE},%{module},%{event.severity},"%{description}",%{observer.sequence_number},%{action_flag},%{device_group_hierarchy.1},%{device_group_hierarchy.2},%{device_group_hierarchy.3},%{device_group_hierarchy.4},%{virtual_system_name},%{observer.hostname}' }
}

I then need to tell Elastic that the @timestamp field is in EST since that is the time zone the logs originate in. I am trying to use the following date filter for that:

   date {
      match => [ "@timestamp", "yyyy/MM/dd HH:mm:ss" ]
      timezone => "EST"
   }

The records load into ES fine with just the dissect pattern, but after putting the date filter in the config file, the logs stop being indexed. I tried deleting the index so it would get recreated; the index shows back up, but it does not contain any records. Does anyone know why this is not working?

What does Logstash log when it fails to index events? The obvious candidate would be a mapping exception. Do you have a template for the index?


Thanks for the response @Badger!

You are correct, I believe it was a mapping error. I had configured @timestamp with a date type as well as a format for the timestamp. Removing the format on that field in the index template solved the issue.

I am having another similar problem, please let me know if this should be a separate discussion post.

I need to convert the event.created field to UTC as well as @timestamp, so I am running event.created through a separate date filter, but it is not adjusting event.created at all. The logs are just being indexed with the unadjusted timestamp for event.created.

event.created has the exact same format as @timestamp (yyyy/MM/dd HH:mm:ss), so I don't think it is a problem with the format in the match parameter of the date filter, since @timestamp is being adjusted properly. The pipeline logs are not showing any errors; it seems like the field is just being ignored by the date filter.

Here is the Logstash filter for Palo Alto System logs:

filter {
   dissect {
      mapping => { "message" => '%{?date} %{?date} %{?date} %{?network} - %{?version},%{event.created},%{observer.serial_number},%{type},%{sub_type},%{?FUTURE_USE},%{@timestamp},%{virtual_system},%{event.id},%{object},%{?FUTURE_USE},%{?FUTURE_USE},%{module},%{event.severity},"%{description}",%{observer.sequence_number},%{action_flag},%{device_group_hierarchy.1},%{device_group_hierarchy.2},%{device_group_hierarchy.3},%{device_group_hierarchy.4},%{virtual_system_name},%{observer.hostname}' }
   }
   date {
      match => [ "@timestamp", "yyyy/MM/dd HH:mm:ss" ]
      timezone => "EST5EDT"
   }
   date {
      match => [ "event.created", "yyyy/MM/dd HH:mm:ss" ]
      timezone => "EST5EDT"
   }
}

OK, I figured out that if you have multiple date filters then you need to set a target field?! I guess this is the case, since that is the only thing I changed and it's working now. This is a little misleading in my opinion, since all of the parameters are listed as optional in the documentation for the date filter plugin.

For future reference, here is my final filter for the PA system logs.

filter {
   dissect {
      mapping => { "message" => '%{?date} %{?date} %{?date} %{?network} - %{?version},%{event.created},%{observer.serial_number},%{type},%{sub_type},%{?FUTURE_USE},%{@timestamp},%{virtual_system},%{event.id},%{object},%{?FUTURE_USE},%{?FUTURE_USE},%{module},%{event.severity},"%{description}",%{observer.sequence_number},%{action_flag},%{device_group_hierarchy.1},%{device_group_hierarchy.2},%{device_group_hierarchy.3},%{device_group_hierarchy.4},%{virtual_system_name},%{observer.hostname}' }
   }
   date {
      match => [ "@timestamp", "yyyy/MM/dd HH:mm:ss" ]
      timezone => "EST5EDT"
   }
   date {
      match => [ "event.created", "yyyy/MM/dd HH:mm:ss" ]
      timezone => "EST5EDT"
      target => "event.created"
   }
}

Well, it is optional, as you can see, since the first date filter does not include the option. When target is not set it defaults to [@timestamp], so without it your second date filter was parsing event.created but writing the result into [@timestamp] rather than back into event.created.
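To make that concrete, here is a minimal sketch (same match and timezone as your config) of what the second date filter does when no target is set:

date {
   match => [ "event.created", "yyyy/MM/dd HH:mm:ss" ]
   timezone => "EST5EDT"
   # no target, so the parsed value goes to the default target, [@timestamp];
   # event.created keeps its original, unadjusted string value
}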

Having a field called [@timestamp] that needs to be parsed is actually an unusual use case (from what I have seen in this forum). Personally I would dissect into a field called something like [@metadata][timestamp] and then have the date filter parse that.
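For example, here is a minimal sketch of that approach; the mapping is trimmed for brevity, and restOfLine is just a placeholder for the remaining PAN-OS fields:

dissect {
   # wherever the full mapping has %{@timestamp}, dissect into [@metadata][timestamp] instead
   mapping => { "message" => '%{?date} %{?date} %{?date} %{?network} - %{?version},%{event.created},%{observer.serial_number},%{type},%{sub_type},%{?FUTURE_USE},%{[@metadata][timestamp]},%{restOfLine}' }
}
date {
   # parse the dissected string and write the result to the default target, [@timestamp];
   # fields under [@metadata] are not sent to the output, so nothing extra gets indexed
   match => [ "[@metadata][timestamp]", "yyyy/MM/dd HH:mm:ss" ]
   timezone => "EST5EDT"
}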

What is the reasoning behind parsing it into a metadata field instead of directly into @timestamp?

The reason not to dissect it into [@timestamp] is that there are some filters that assume that field will be a LogStash::Timestamp. If the date parsing fails then (for example) a sprintf reference for a timestamp format will fail.
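For example, a sketch of an elasticsearch output that relies on that (the host and index name are made up):

output {
   elasticsearch {
      hosts => ["localhost:9200"]
      # the %{+YYYY.MM.dd} sprintf reference formats [@timestamp];
      # it fails if date parsing failed and the field is not a real Timestamp
      index => "pan-system-%{+YYYY.MM.dd}"
   }
}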

I put it under [@metadata] out of habit. If your format varies because you are consuming multiple log types, it might make more sense to dissect it into a field that will get indexed, and have your date filter remove that field after successfully parsing it:

date {
    match => [ "someField", "someFormat" ]
    remove_field => [ "someField" ]
}

If the date filter succeeds then [someField] will get removed, but if it fails it will be left intact, so that you can see in the Elasticsearch document what new date format you need to parse.
