Does anyone have a working Logstash config for processing Cisco logs?
Thanks... would you mind giving a working config?
Thanks in advance.
input {
  udp {
    port => 514   ## change me to whatever you set your ASA syslog port to
    type => "syslog"
  }
}
filter {
  ####### Cisco FW ####
  if [type] == "syslog" {
    grok {
      match => ["message", "%{CISCO_TAGGED_SYSLOG} %{GREEDYDATA:cisco_message}"]
    }
    # Parse the syslog severity and facility
    syslog_pri { }
    # Extract fields from each of the detailed message types.
    # The patterns provided below are included in the core of Logstash 1.2.0.
    grok {
      match => [
        "cisco_message", "%{CISCOFW106001}",
        "cisco_message", "%{CISCOFW106006_106007_106010}",
        "cisco_message", "%{CISCOFW106014}",
        "cisco_message", "%{CISCOFW106015}",
        "cisco_message", "%{CISCOFW106021}",
        "cisco_message", "%{CISCOFW106023}",
        "cisco_message", "%{CISCOFW106100}",
        "cisco_message", "%{CISCOFW110002}"
      ]
    }
  }
}
This is my try, but it still doesn't work. Can you point out any error I made, or give a working config?
My router is not an ASA firewall; it is a normal router sending syslog messages.
Magnus,
can you share a working config for Cisco?
Which Beat should I make the default: Packetbeat or Filebeat?
input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
  udp {
    port => 514
    type => "cisco-fw"
  }
}
filter {
  # Extract fields from each of the detailed message types.
  # The patterns provided below are included in the core of Logstash 1.4.2.
  grok {
    match => [
      "message", "%{CISCOFW106001}",
      "message", "%{CISCOFW106006_106007_106010}",
      "message", "%{CISCOFW106014}",
      "message", "%{CISCOFW106015}",
      "message", "%{CISCOFW106021}",
      "message", "%{CISCOFW106023}",
      "message", "%{CISCOFW106100}",
      "message", "%{CISCOFW110002}",
      "message", "%{CISCOFW302010}",
      "message", "%{CISCOFW302013_302014_302015_302016}",
      "message", "%{CISCOFW302020_302021}",
      "message", "%{CISCOFW305011}",
      "message", "%{CISCOFW313001_313004_313008}",
      "message", "%{CISCOFW313005}",
      "message", "%{CISCOFW402117}",
      "message", "%{CISCOFW402119}",
      "message", "%{CISCOFW419001}",
      "message", "%{CISCOFW419002}",
      "message", "%{CISCOFW500004}",
      "message", "%{CISCOFW602303_602304}",
      "message", "%{CISCOFW710001_710002_710003_710005_710006}",
      "message", "%{CISCOFW713172}",
      "message", "%{CISCOFW733100}"
    ]
  }
  # Parse the syslog severity and facility
  syslog_pri { }
  # Do a DNS lookup for the sending host.
  # Otherwise the host field will contain an
  # IP address instead of a hostname.
  dns {
    reverse => [ "host" ]
    action => "replace"
  }
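  # The two add_field calls below build a [lon, lat] array that the mutate
  # filter then converts to floats, so that [geoip][coordinates] can be mapped
  # as a geo_point in Elasticsearch (this assumes a matching index template).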
  geoip {
    source => "src_ip"
    target => "geoip"
    database => "/opt/logstash/GeoLiteCity.dat"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
  }
  mutate {
    convert => [ "[geoip][coordinates]", "float"]
  }
  # Do a GeoIP lookup for the ASN/ISP information.
  geoip {
    database => "/opt/logstash/GeoIPASNum.dat"
    source => "src_ip"
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Also, I can see the Cisco router logs coming in on UDP port 514, but no index is created, etc.
Hi Magnus,
can you reply to my last comment with a config attached?
Thanks
This is my try, but it still doesn't work. Can you point out any error I made, or give a working config?
How can we possibly help when we don't know what the messages you get from your router look like?
Which Beat should I make the default: Packetbeat or Filebeat?
Neither Packetbeat nor Filebeat deals with syslog messages, so I'm not sure why you'd need either.
Also, I can see the Cisco router logs coming in on UDP port 514, but no index is created, etc.
How did you verify that no indexes are created?
For now I suggest you comment out the index and document_type options in your elasticsearch output. Right now you'll be fine with the default settings.
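In other words, roughly this (reusing the hosts line from your config above):

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    # index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    # document_type => "%{[@metadata][type]}"
  }
}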
There is no elasticsearch output... the logs are sent directly to Logstash, which is on the same host as Elasticsearch.
There is no elasticsearch output
Yes, that's what you said, but how did you reach that conclusion?
Temporarily replace the elasticsearch output with a stdout { codec => rubydebug } output to remove one source of errors and still see exactly what the resulting events look like.
Like this?
output {
  stdout { codec => rubydebug }
}
Yes.
This is the log message coming to ELK:
Mar 27 21:17:01 ciscorouterip 32710588: ha-ir1: Mar 27 21:17:01: %FMANFP-6-IPACCESSLOGP: SIP0: fman_fp_image: list OUTGOING-FILTER permitted tcp sourceip(26056) Port-channel20.20-> destinationip(443), 1 packet
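(A message of this shape is plain IOS syslog rather than ASA output, so the CISCOFW patterns in the config above will never match it. Below is a minimal grok sketch that should parse it, built from the stock CISCOTIMESTAMP and CISCOTAG patterns; the field names are only illustrative assumptions:)

filter {
  grok {
    # Matches: "<timestamp> <host> <sequence>: <process>: <timestamp>: %FACILITY-SEV-MNEMONIC: <rest>"
    match => [ "message", "%{CISCOTIMESTAMP:timestamp} %{SYSLOGHOST:device} %{NUMBER:sequence}: %{DATA:process}: %{CISCOTIMESTAMP:log_time}: %%{CISCOTAG:ciscotag}: %{GREEDYDATA:ios_message}" ]
  }
}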
What's the setting for the index pattern in Kibana?
Also, where can I see the indexes that were created?
What's the setting for the index pattern in Kibana?
Also, where can I see the indexes that were created?
You can use Elasticsearch's "cat indices" API.
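For example (assuming Elasticsearch listens on localhost:9200):

curl 'localhost:9200/_cat/indices?v'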
Come on, mate, I know how to create and delete an index. What I'm asking about is why all these changes have had no effect whatsoever.
FYI,
I have another ELK stack that works perfectly with Packetbeat shipping logs from my 4 DNS resolvers to it. I could not find a way to make servers with log files ship their logs to that same ELK using Filebeat, so I set up this ELK server so that Filebeat can ship logs from my SMTP, POP3, RADIUS, and syslog servers (for Cisco routers, BRAS, MPLS PE routers, etc.).
I have searched and searched for about a week or two now, and you seem to just point me to that basic tutorial on creating and deleting indexes, which is no help at all...
I understand your point; it's much like the Chinese proverb: give me a fish today and I can eat for a day, but teach me how to fish and I can eat every day. To make it short, it is quite frustrating to keep asking all these questions... I know it takes your time, but I'm just asking you for a working config, that's all, yet you seem unwilling to do that and instead beat around the bush... anyway, I hope you can be more helpful...
I have searched and searched for about a week or two now, and you seem to just point me to that basic tutorial on creating and deleting indexes, which is no help at all...
I don't know when I ever pointed you to documentation describing how to create and delete indexes. I did point you to documentation about index patterns in Kibana, which is what I thought you asked for.
To make it short, it is quite frustrating to keep asking all these questions... I know it takes your time, but I'm just asking you for a working config, that's all, yet you seem unwilling to do that and instead beat around the bush... anyway, I hope you can be more helpful...
If it were easy to give you a working configuration, that's what I'd do. Since your immediate problem seems to be getting any kind of data into ES even though you have a reasonable-looking configuration file, there's nothing I can do but attempt to debug the problem, and because I don't have access to your machine, asking you questions is the only thing I can do.
Debating how I choose to offer my help isn't productive for any of us. Good luck.
I apologise for my frustration in trying to figure this out. I know you take time to reply to my messages and I should appreciate that... anyway, thanks for your time. I will keep looking around the web for anything that might solve my problem. I promise that if I succeed I will post the full config (input, filter, and output) here so that someone in the future can benefit from it. Sorry for my tone; I really hoped this would work sooner, but it seems I will keep looking, hopefully for no more than another week or two... anyway, thanks for your time and effort.
Regards,
Maile.