HP ProCurve switch log files not showing

Hello everyone,

I am completely new to Logstash, Kibana and Elasticsearch.
However, I had no issues setting up the server and forwarding log files from Windows and Linux machines. What is giving me a hard time is getting logs from an HP ProCurve switch into Logstash.
I configured the switch via the "logging" command to send its logs to my Logstash server, but they are not showing up in Kibana. I tried several configuration options in the lumberjack (not needed as far as I can tell) and the syslog conf files, so they are a bit bloated, as I got a little desperate.

Here is the configuration of the lumberjack conf file:

input {
  lumberjack {
    port => 5043
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

input {
  tcp {
    codec => json_lines { charset => "CP1252" }
    port => "3515"
    tags => [ "tcpjson" ]
  }
}

input {
  syslog {
    port => 1514
  }
}

input {
  udp {
    port => "514"
    type => "Procurve"
  }
}

filter {
  date {
    locale => "en"
    timezone => "Etc/GMT"
    match => [ "EventTime", "YYYY-MM-dd HH:mm:ss" ]
  }
}

output {
  elasticsearch {
    host => "localhost"
  }
  stdout { codec => rubydebug }
}

----------

And here is my configuration of the syslog conf file:


input {
  udp {
    port => "514"
    type => "Procurve"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }

  if [type] == "Procurve" {
    if [message] =~ "last message repeated" {
      grok {
        match => [ "message", "<[0-9]*>%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:hostname} %{GREEDYDATA:msg-repeated}" ]
      }
    } else {
      grok {
        match => [ "message", "<[0-9]*>%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:hostname} %{DATA:switch-category}:\s+%{GREEDYDATA:switch-message}" ]
      }
    }
    if [switch-category] =~ "ports|FFI" {
      if [switch-category] =~ "ports" { mutate { add_tag => [ "layer1" ] } }
      if [switch-category] =~ "FFI"   { mutate { add_tag => [ "layer2" ] } }
      grok {
        match => [ "switch-message", "port %{DATA:port}[- ]%{GREEDYDATA:port-message}" ]
      }
    }
    date {
      match => [ "timestamp", "MMM dd HH:mm:ss", "MMM  d HH:mm:ss" ]
      timezone => "America/Los_Angeles"
    }
  }
}
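Grok patterns are awkward to debug inside a running Logstash pipeline. One way to sanity-check the ProCurve pattern above is to try a rough equivalent as a plain regex first; the sketch below is only an approximation of the grok semantics, and the sample log line is invented for illustration, so substitute a real line captured from your switch:

```python
import re

# Rough Python equivalent of the ProCurve grok pattern above
# (<pri>TIMESTAMP HOST CATEGORY: MESSAGE), for checking the pattern
# logic outside Logstash. The sample line below is made up.
SYSLOGTIMESTAMP = r"[A-Z][a-z]{2} [ \d]\d \d\d:\d\d:\d\d"
pattern = re.compile(
    r"<\d*>(?P<timestamp>" + SYSLOGTIMESTAMP + r") "
    r"(?P<hostname>\S+) "
    r"(?P<category>[^:]+):\s+"
    r"(?P<message>.*)"
)

sample = "<14>Jan  5 10:15:42 switch01 ports: port 3 is now on-line"
m = pattern.match(sample)
if m:
    print(m.group("category"), "->", m.group("message"))
else:
    print("no match")
```

If the regex matches but the grok filter does not tag the event, the problem is in the pipeline (wrong `type`, conditional never taken) rather than in the pattern itself.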

----------

As I said, they are bloated beyond usefulness, but you never know.
Logstash is running with root privileges, so it should be able to listen on port 514.

Any suggestion would be helpful there.

Kind thanks in advance,
Chris

Start simple with a single network listener and a stdout { codec => rubydebug } output. If you send a message by hand with netcat, does it get through? What about the switches? Then gradually add complexity.
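To make the hand test concrete: with the UDP input from the config above, something like `echo '<14>test: hello' | nc -u -w1 <logstash-host> 514` (where `<logstash-host>` is a placeholder) should produce a rubydebug event on stdout. The sketch below only demonstrates the same send-and-receive mechanics over loopback on an arbitrary unprivileged port, with a plain socket standing in for the Logstash listener:

```python
import socket

# Loopback demonstration of the netcat-style check: bind a UDP socket
# (standing in for the Logstash udp input) and send it a syslog-style
# message. Port 5514 is an arbitrary unprivileged stand-in for 514.
listener = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
listener.bind(("127.0.0.1", 5514))
listener.settimeout(2)  # don't hang forever if the datagram is lost

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"<14>Jan  5 10:15:42 testhost test: hello",
              ("127.0.0.1", 5514))

data, addr = listener.recvfrom(1024)
print("received:", data.decode())

sender.close()
listener.close()
```

If the loopback case works but a message to the real listener never shows up, the usual suspects are a firewall, a wrong port, or Logstash not actually binding the port.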

Thanks for your reply.
I have a stdout { codec => rubydebug } output configured, but any message I tried to send via netcat did not seem to get through.
What seems curious to me is that I can forward syslog logs from another Ubuntu machine to the Logstash server and they show up in Kibana as type "syslog".
I just can't get the ProCurve logs to get forwarded.

This must have something to do with my Logstash configuration, because Logstash does not seem to be listening on port 514, at least that is what "lsof -nPi :514" tells me.

edit: I got it sorted out - I used setcap to allow Java to listen on privileged ports such as 514, and now I am at least getting logs from one switch forwarded into Logstash.