Logstash Grok for Cisco ISE

I have this filter and it's not parsing correctly:

filter {
        match => { "message" => "<%{INT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_ts} %{HOSTNAME:host} %{WORD:module} %{INT:session_id} %{INT:severity} %{INT:msg_id} %{GREEDYDATA:restOfLine}" }
    }
    kv {
        source => "restOfLine"
        field_split => ","
        trim_key => "\\"
        value_split => "="
    }
    mutate {
        remove_field => [ "message", "restOfLine" ]
        add_field => { "devicevendor" => "cisco" }
        add_field => { "deviceproduct" => "ise" }
    }
}

<182>Sep 18 13:46:04 ise03 CISE_Profiler 0000000251 1 0 2025-09-18 13:46:04.744 +00:00 0179839630 80002 INFO Profiler: Profiler EndPoint profiling event occurred, ConfigVersionId=159, OperatingSystem=Windows 11 Enterprise, EndpointCertainityMetric=5, EndpointIPAddress=0.0.0.0

As written, that will fail to compile for two reasons. Firstly, it is missing the “grok {” for the first filter. Secondly, you cannot escape a backslash at the end of a string. This is discussed at length in this GitHub issue.
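For the first point, the match needs to be wrapped in a grok filter, along these lines (a minimal sketch, reusing your pattern unchanged):

filter {
    grok {
        match => { "message" => "<%{INT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_ts} %{HOSTNAME:host} %{WORD:module} %{INT:session_id} %{INT:severity} %{INT:msg_id} %{GREEDYDATA:restOfLine}" }
    }
    # kv and mutate follow here as before
}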

However, I see no reason to trim backslashes from the keys; you probably want to trim spaces instead, in which case it should work if you try:

    kv {
        source => "restOfLine"
        field_split => ","
        trim_key => " "
        value_split => "="
        remove_field => [ "restOfLine" ]
    }
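If that works, the sample event above should come out with fields along these lines (illustrative, not actual output):

ConfigVersionId => "159"
OperatingSystem => "Windows 11 Enterprise"
EndpointCertainityMetric => "5"
EndpointIPAddress => "0.0.0.0"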

I had missed that, but still nothing:


<182>Sep 18 14:29:13 ise03 CISE_Profiler 0000000255 1 0 2025-09-18 14:29:13.635 +00:00 0180184437 80002 INFO  Profiler: Profiler EndPoint profiling event occurred, ConfigVersionId=159, OperatingSystem=Apple iOS


filter {
    grok {
        match => { "message" => "<%{INT:priority}>%{SYSLOGTIMESTAMP:syslog_ts} %{HOSTNAME:hostname} %{WORD:program} %{INT:session_id} %{INT:flag1} %{INT:flag2} %{TIMESTAMP_ISO8601:timestamp} %{INT:offset}:%{INT:code} %{INT:subcode} %{INT:subcode2} %{LOGLEVEL:log_level}  %{WORD:Profiler}: %{DATA:Profiler1}, %{GREEDYDATA:restOfLine}" }
    }
    kv {
        source => "restOfLine"
        field_split => ","
        trim_key => " "
        value_split => "="
    }
    mutate {
        remove_field => [ "message", "restOfLine" ]
        add_field => { "devicevendor" => "cisco" }
        add_field => { "deviceproduct" => "ise" }
    }
}

So what is happening? Does Logstash successfully start the pipeline? Does it log any errors?

What do you get on stdout if you add

output { stdout { codec => rubydebug } }
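If you want to test the filter outside the live pipeline, a throwaway config along these lines can help (a sketch; run it with bin/logstash -f test.conf and paste a raw event on stdin):

# test.conf -- feed sample lines on stdin, inspect parsed events on stdout
input { stdin { } }

filter {
    # your grok / kv / mutate filters from above
}

output { stdout { codec => rubydebug } }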

I get a grok error:

message <179>Sep 18 18:38:14 ise CISE_Profiler 0000000263 1 0 2025-09-18 18:38:14.661 +00:00 0182114190 80009 ERROR Profiler: Profiler SNMP request failure, ConfigVersionId=159, EndpointNADAddress=1.1.1.1, ProfilerServer=ise.com, SNMPOIDValue=ifDescr, ProfilerErrorMessage=Request timed out.,
tags [tcp/514, _grokparsefailure]

That’s because your grok pattern has two literal spaces following the log level, which matches the INFO events (they have two spaces before Profiler:) but not the ERROR event, which has only one. Try editing the pattern to use

%{LOGLEVEL:log_level}\s+%{WORD:Profiler}:
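In context, the grok filter would then be (a sketch; nothing else in the pattern changes):

grok {
    match => { "message" => "<%{INT:priority}>%{SYSLOGTIMESTAMP:syslog_ts} %{HOSTNAME:hostname} %{WORD:program} %{INT:session_id} %{INT:flag1} %{INT:flag2} %{TIMESTAMP_ISO8601:timestamp} %{INT:offset}:%{INT:code} %{INT:subcode} %{INT:subcode2} %{LOGLEVEL:log_level}\s+%{WORD:Profiler}: %{DATA:Profiler1}, %{GREEDYDATA:restOfLine}" }
}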

This seems to work! Will \s account for any number of spaces?

\s will match exactly one whitespace character, \s+ will match one or more, and \s* will match zero or more.
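For example, against the two sample events above (hypothetical fragments for illustration):

%{LOGLEVEL:log_level}\s%{WORD:Profiler}     # matches "ERROR Profiler" (one space) but not "INFO  Profiler"
%{LOGLEVEL:log_level}\s+%{WORD:Profiler}    # matches both
%{LOGLEVEL:log_level}\s*%{WORD:Profiler}    # matches both, and also "ERRORProfiler" with no space at all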


@Elk_huh, just a side question: why not use the integration that comes with Elastic? Even if you don’t use the Agent to collect, you could use the pipeline it provides, which should be a great starting point for your parsing: Cisco ISE | Elastic integrations

Hi Justin!

You know, that would be amazing, as we are trying to move away from Logstash.

The Elastic Agent - Cisco ISE integration doesn't parse Profiler logs from ISE, so if you can get this on the roadmap, that would be great!

You could open an issue in the Integrations repository so someone can take a look at it.

It is this one: GitHub - elastic/integrations


I see you have already created the request.

The dev team is now aware. I was also wondering if you had tried creating your own integration through the Automatic Import feature: Automatic import | Elastic Docs

It might be quicker than trying to build your own integration using the custom Filestream option: filestream input | Beats