Hello Community,
I am quite new to the ELK stack and I ran into some problems with indexing Cisco logfiles to make them searchable in Kibana.
Background information:
Server A: xx.xxx.xxx.102 (Loghost)
Running syslog-ng and receiving logs from about 400 Cisco devices.
Path for Logfiles:
/var/log/remotelogs/switches/[switch-ip-address]/[switch-ipaddress]-[yyyy.mm.dd].log
syslog-ng creates a new logfile every day at 00:00
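The syslog-ng side is essentially a file destination with host and date macros, roughly like this (simplified sketch; the source and destination names are illustrative, not the exact config, and ${HOST} stands in for the device address):

destination d_switches {
    file("/var/log/remotelogs/switches/${HOST}/${HOST}-${YEAR}.${MONTH}.${DAY}.log"
         create-dirs(yes));
};
log { source(s_remote); destination(d_switches); };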
Filebeat is used to send the logfiles to Server B.
Filebeat config:
filebeat.inputs:
- type: filestream
  enabled: true
  id: cisco-switch
  paths:
    - /var/log/remotelogs/switches/*/*.log
  tags: [cisco-switch]

- type: filestream
  enabled: true
  id: cisco-asa
  paths:
    - '/var/log/remotelogs/ASA/*/*.log'
  tags: [cisco-asa]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

output.logstash:
  hosts: ["xx.xx.xx.103:5044"]

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
Server B: xx.xx.xx.103 (V-Host with 8 cores @ 2.6 GHz and 32 GB RAM)
Running ELK Stack 8.x and receiving logs from Filebeat via Logstash.
Logstash Config: (Input)
input {
  beats {
    port => 5044
  }
}
Logstash Config Filter Switch:
filter {
  if "cisco-switch" in [tags] and "pre-processed" not in [tags] {
    mutate {
      add_tag => [ "pre-processed", "Switch", "Cisco IOS/NXOS" ]
    }
    grok {
      patterns_dir => ["/etc/logstash/patterns/*"]
      match => [
        "message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{CISCOTIMESTAMPTZ1:logtime}: %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
        "message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{CISCOTIMESTAMPTZ2:logtime}: %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
        "message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{TZ:logtime}: %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
        "message", "%{CISCOTIMESTAMP:received} %{SYSLOGHOST:device} %{CISCOTIMESTAMPTZ2:logtime}: %{DATA:facility}: %{GREEDYDATA:log_message}",
        "message", "%{CISCOTIMESTAMP:logtime} %{SYSLOGHOST:device} %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}",
        "message", "%{CISCOTIMESTAMP:logtime} %{SYSLOGHOST:device} %{GREEDYDATA:log_message}"
      ]
    }
    syslog_pri { }
    date {
      match => [
        "logtime",
        # IOS
        "MMM d YYYY HH:mm:ss.SSS ZZZ",
        "MMM d YYYY HH:mm:ss ZZZ",
        "MMM d YYYY HH:mm:ss.SSS",
        "MMM dd YYYY HH:mm:ss.SSS ZZZ",
        "MMM dd YYYY HH:mm:ss ZZZ",
        "MMM dd YYYY HH:mm:ss.SSS",
        # Nexus
        "YYYY MMM d HH:mm:ss.SSS ZZZ",
        "YYYY MMM d HH:mm:ss ZZZ",
        "YYYY MMM d HH:mm:ss.SSS",
        "YYYY MMM dd HH:mm:ss.SSS ZZZ",
        "YYYY MMM dd HH:mm:ss ZZZ",
        "YYYY MMM dd HH:mm:ss.SSS",
        # Log date without year as timestamp
        "MMM dd HH:mm:ss",
        "MMM d HH:mm:ss",
        "ISO8601"
      ]
      target => "@timestamp"
    }
  }
}
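For completeness: CISCOTIMESTAMPTZ1 and CISCOTIMESTAMPTZ2 are custom definitions from the patterns_dir, roughly along these lines (illustrative only, not the exact file contents):

# /etc/logstash/patterns/cisco-timestamps (illustrative)
# IOS-style timestamp with year and timezone, e.g. "Mar  1 2024 10:15:22.123 UTC"
CISCOTIMESTAMPTZ1 %{MONTH} +%{MONTHDAY} %{YEAR} %{TIME} %{TZ}
# NX-OS-style timestamp with leading year, e.g. "2024 Mar  1 10:15:22.123 UTC"
CISCOTIMESTAMPTZ2 %{YEAR} %{MONTH} +%{MONTHDAY} %{TIME}( %{TZ})?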
Logstash Filter ASA:
filter {
  if "cisco-asa" in [tags] and "pre-processed" not in [tags] {
    mutate {
      add_tag => [ "pre-processed", "ASA" ]
    }
    grok {
      patterns_dir => ["/etc/logstash/patterns/*"]
      match => [
        "message", "%{CISCOTIMESTAMP:logtime} %{SYSLOGHOST:device} %%{DATA:facility}-%{INT:severity_level}-%{CISCO_REASON:mnemonic}: %{GREEDYDATA:log_message}"
      ]
    }
    syslog_pri { }
    grok {
      patterns_dir => ["/etc/logstash/patterns/*"]
      match => [
        "log_message", "%{CISCOFW106001}",
        "log_message", "%{CISCOFW106006_106007_106010}",
        "log_message", "%{CISCOFW106014}",
        "log_message", "%{CISCOFW106015}",
        "log_message", "%{CISCOFW106021}",
        "log_message", "%{CISCOFW106023}",
        "log_message", "%{CISCOFW106100}",
        "log_message", "%{CISCOFW110002}",
        "log_message", "%{CISCOFW302010}",
        "log_message", "%{CISCOFW302013_302014_302015_302016}",
        "log_message", "%{CISCOFW302020_302021}",
        "log_message", "%{CISCOFW305011}",
        "log_message", "%{CISCOFW313001_313004_313008}",
        "log_message", "%{CISCOFW313005}",
        "log_message", "%{CISCOFW402117}",
        "log_message", "%{CISCOFW402119}",
        "log_message", "%{CISCOFW419001}",
        "log_message", "%{CISCOFW419002}",
        "log_message", "%{CISCOFW500004}",
        "log_message", "%{CISCOFW602303_602304}",
        "log_message", "%{CISCOFW710001_710002_710003_710005_710006}",
        "log_message", "%{CISCOFW713172}",
        "log_message", "%{CISCOFW733100}",
        "log_message", "%{WORD:action} %{WORD:protocol} %{CISCO_REASON:reason} from %{DATA:src_interface}:%{IP:src_ip}/%{INT:src_port} to %{DATA:dst_interface}:%{IP:dst_ip}/%{INT:dst_port}; %{GREEDYDATA:dnssec_validation}",
        "log_message", "%{CISCO_ACTION:action} %{WORD:protocol} %{CISCO_REASON:reason}.*(%{IP:src_ip}).*%{IP:dst_ip} on interface %{GREEDYDATA:interface}",
        "log_message", "Connection limit exceeded %{INT:inuse_connections}/%{INT:connection_limit} for input packet from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} on interface %{GREEDYDATA:interface}",
        "log_message", "TCP Intercept %{DATA:threat_detection} to %{IP:ext_nat_ip}/%{INT:ext_nat_port}.*(%{IP:int_nat_ip}/%{INT:int_nat_port}).*Average rate of %{INT:syn_avg_rate} SYNs/sec exceeded the threshold of %{INT:syn_threshold}.#%{INT}",
        "log_message", "Embryonic connection limit exceeded %{INT:econns}/%{INT:limit} for %{WORD:direction} packet from %{IP:src_ip}/%{INT:src_port} to %{IP:dst_ip}/%{INT:dst_port} on interface %{GREEDYDATA:interface}"
      ]
    }
    date {
      match => [
        "logtime",
        # IOS
        "MMM dd HH:mm:ss",
        "MMM d HH:mm:ss",
        "MMM d YYYY HH:mm:ss.SSS ZZZ",
        "MMM d YYYY HH:mm:ss ZZZ",
        "MMM d YYYY HH:mm:ss.SSS",
        "MMM dd YYYY HH:mm:ss.SSS ZZZ",
        "MMM dd YYYY HH:mm:ss ZZZ",
        "MMM dd YYYY HH:mm:ss.SSS",
        # Nexus
        "YYYY MMM d HH:mm:ss.SSS ZZZ",
        "YYYY MMM d HH:mm:ss ZZZ",
        "YYYY MMM d HH:mm:ss.SSS",
        "YYYY MMM dd HH:mm:ss.SSS ZZZ",
        "YYYY MMM dd HH:mm:ss ZZZ",
        "YYYY MMM dd HH:mm:ss.SSS",
        "ISO8601"
      ]
      target => "@timestamp"
    }
  }
}
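To illustrate what the first grok is splitting, here is a made-up line in the same shape as the ASA logs (not copied from the real files):

Mar  1 10:15:22 fw01.example.local %ASA-6-302013: Built inbound TCP connection 12345 for outside:203.0.113.5/51000 (203.0.113.5/51000) to inside:10.0.0.10/443 (10.0.0.10/443)

The leading timestamp is meant to land in logtime, the hostname in device, the %ASA-6-302013 header in facility/severity_level/mnemonic, and the remainder in log_message, which the second grok then runs through the CISCOFW* patterns.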
Logstash Config Output:
output {
  if "cisco-switch" in [tags] {
    if "_grokparsefailure" in [tags] {
      file {
        path => "/var/log/remotelogs/grokfail/grokfail-logparse.log"
      }
    }
    else {
      elasticsearch {
        hosts => ["https://localhost:9200"]
        index => "cisco-switch-%{+yyyy.MM.dd}"
        user => "elastic"
        password => "******************"
        ssl => true
        ssl_certificate_verification => false
      }
    }
  }
  if "cisco-asa" in [tags] {
    elasticsearch {
      hosts => ["https://localhost:9200"]
      index => "cisco-asa-%{+yyyy.MM.dd}"
      user => "elastic"
      password => "******************"
      ssl => true
      ssl_certificate_verification => false
    }
  }
}
Now there are multiple problems:

- There is a constant data stream of about 3 Mbit/s between the loghost and the ELK stack.
- Server B reads logfiles from Server A multiple times and adds them to the index multiple times, so lines show up repeatedly in Discover even though each line appears only once in the logfile. (Remark: yes, a colleague accidentally built a loop.)
- The daily indices (cisco-switch-yyyy.mm.dd and cisco-asa-yyyy.mm.dd) are getting very big, up to 10 GB, even though the combined size of the available logfiles on the loghost is only 100-200 MB. I think this is mainly because the logfiles are indexed multiple times.
- ASA logs are only indexed for a few hours after the new index for the day is created, even though the logfile on the loghost covers the full 24 hours. Every day, a few hours after 00:00 when the new logfile is created, no more entries get indexed. I noticed that these problems began when I changed the timestamp field to use the Cisco timestamps instead of the default timestamp from when Logstash received the message.

I am also trying to figure out how to delete indices older than 90 days; the Cisco logs on the loghost are deleted after that time as well. I tried it with ILM but always got errors with the alias, and I have no more ideas, so any advice would be appreciated.
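For reference, what I am trying to end up with is a delete-only policy attached via an index template, roughly like this (the policy and template names are just placeholders, not my actual setup). Since the daily indices are plain indices without a rollover alias, my understanding is that a delete-only phase should not need an alias at all:

PUT _ilm/policy/cisco-90d-delete
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT _index_template/cisco-logs
{
  "index_patterns": ["cisco-switch-*", "cisco-asa-*"],
  "template": {
    "settings": {
      "index.lifecycle.name": "cisco-90d-delete"
    }
  }
}

Is that roughly the right direction for daily indices, or is there a better way to clean them up after 90 days?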