Hello everyone!
I have Elastic Stack 6.7 on a single node.
I am trying to parse logs from Cisco devices. When I use the configuration below, Logstash stops listening on all ports (for Winlogbeat and Filebeat).
P.S. I have installed the Grok plugin.
Please help.
INPUT - Logstash listens on port 8514 for these logs.
input {
  syslog {
    port => "8514"
    type => "syslog"
  }
}
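For reference, the same logs could also be received with plain tcp/udp inputs instead of the syslog input — a sketch I have not tested on my node, using the same port and type so the filter conditionals still apply:

```
input {
  # Alternative to the syslog input: accept the Cisco logs over raw TCP/UDP
  # on the same port, and keep the same type for the filter conditionals.
  tcp {
    port => 8514
    type => "syslog"
  }
  udp {
    port => 8514
    type => "syslog"
  }
}
```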
FILTER - Try to parse the Cisco log format.
The Cisco device configuration:
clock timezone Europe +1
no clock summer-time
ntp server 0.0.0.0 prefer
ntp server 129.6.15.28
ntp server 131.107.13.100
service timestamps log datetime msec show-timezone
service timestamps debug datetime msec show-timezone
logging source-interface Loopback0
! Two logging servers for redundancy
logging host 0.0.0.0 transport tcp port 8514
logging host 0.0.0.0 transport tcp port 8514
logging trap 6
filter {
  # NOTE: The frontend Logstash servers set the type of incoming messages.
  if [type] == "syslog" and [host_group] == "Netzwerk" {
    # Parse the log entry into sections. Cisco doesn't use a consistent log format, unfortunately.
    grok {
      patterns_dir => "/var/lib/neteye/logstash/etc/pattern.d"
      match => [
        # IOS
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence})?: )?%{CISCOTIMESTAMPTZ:log_date}: %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence})?: )?%{CISCOTIMESTAMPTZ:log_date}: %%{CISCO_REASON:facility}-%{CISCO_REASON:facility_sub}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        # Nexus
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence})?: )?%{NEXUSTIMESTAMP:log_date}: %%{CISCO_REASON:facility}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} ((%{NUMBER:log_sequence})?: )?%{NEXUSTIMESTAMP:log_date}: %%{CISCO_REASON:facility}-%{CISCO_REASON:facility_sub}-%{INT:severity_level}-%{CISCO_REASON:facility_mnemonic}: %{GREEDYDATA:message}",
        # WLC
        "message", "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:logsource} %{SYSLOGHOST:wlc_host}: %{DATA:wlc_action}: %{CISCOTIMESTAMP:log_date}: %{DATA:wlc_mnemonic}: %{DATA:wlc_mnemonic_message} %{GREEDYDATA:message}"
      ]
      overwrite => [ "message" ]
      add_tag => [ "cisco" ]
    }
  }
  # If we made it here, the grok was successful.
  if "cisco" in [tags] {
    date {
      match => [
        "log_date",
        # IOS
        "MMM dd HH:mm:ss.SSS ZZZ",
        "MMM dd HH:mm:ss ZZZ",
        "MMM dd HH:mm:ss.SSS",
        # Nexus
        "YYYY MMM dd HH:mm:ss.SSS ZZZ",
        "YYYY MMM dd HH:mm:ss ZZZ",
        "YYYY MMM dd HH:mm:ss.SSS",
        # Hail Mary
        "ISO8601"
      ]
    }
    # Add the log level's name instead of just a number.
    mutate {
      gsub => [
        "severity_level", "0", "0 - Emergency",
        "severity_level", "1", "1 - Alert",
        "severity_level", "2", "2 - Critical",
        "severity_level", "3", "3 - Error",
        "severity_level", "4", "4 - Warning",
        "severity_level", "5", "5 - Notification",
        "severity_level", "6", "6 - Informational"
      ]
    }
  } # if
} # filter
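One thing I noticed while reviewing this: the output uses `document_id => "%{fingerprint}"`, but nothing in the filter creates a `fingerprint` field, so every event would get the literal ID `%{fingerprint}` and overwrite the previous one. A minimal sketch of a fingerprint filter that could be added inside the `filter` block above (the `source`/`target` choices here are my assumption, not from the original setup):

```
filter {
  # Hash the raw message into a "fingerprint" field so the
  # elasticsearch output can use it as a stable document_id.
  fingerprint {
    source => "message"
    target => "fingerprint"
    method => "SHA1"
  }
}
```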
output {
  # Something went wrong with the grok parsing; don't discard the messages, though.
  if "_grokparsefailure" in [tags] {
    file {
      path => "/tmp/fail-%{type}-%{+YYYY.MM.dd}.log"
    }
  }
  # The message was parsed correctly and should be sent to Elasticsearch.
  if "cisco" in [tags] {
    #file {
    #  path => "/tmp/%{type}-%{+YYYY.MM.dd}.log"
    #}
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "network-%{+YYYY.MM.dd}"
      document_type => "%{type}"
      document_id => "%{fingerprint}"
    }
  }
}