This is my logstash.conf file:
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_type] == "syslog" {
    grok {
      match => { "message" => "(#!TOT:)?(320:)(%{NUMBER:st1}:)*%{NUMBER:st2} %{NUMBER:st3} %{NUMBER:st4} %{NUMBER:st5} %{NUMBER:st6} %{NUMBER:st7} %{NUMBER:st8} %{NUMBER:st9} %{NUMBER:st10} %{NUMBER:st11} %{NUMBER:st12} %{NUMBER:st13} %{NUMBER:st14} %{NUMBER:st15} %{NUMBER:st16} %{NUMBER:st17} %{NUMBER:st18} %{NUMBER:st19} %{NUMBER:st20} %{NUMBER:st21} %{NUMBER:st22} %{NUMBER:st23} %{NUMBER:st24} %{NUMBER:st25} %{NUMBER:st26} %{NUMBER:st27} %{NUMBER:st28} %{NUMBER:st29}" }
    }
  }
  if ("_grokparsefailure" in [tags]) { drop {} }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
This is my filebeat.yml file:
filebeat:
  prospectors:
    -
      paths:
        - /var/log/20450225.1805.min
        # - /var/log/syslog
        # - /var/log/*.log
      input_type: log
      fields:
        log_type: syslog
  registry_file: /var/lib/filebeat/registry
output:
  logstash:
    hosts: ["107.109.101.20:5044"]
    bulk_max_size: 1024
This is a part of my 20450225.1805.min file:
84:0:58:0:1:0:11:2610 1713 0 0 0 0 0 0 0 1833 1 :
84:0:49:6:0:0:11:0 0 0 0 0 0 0 0 0 0 0 :
84:0:53:1:1:0:11:2502 1544 0 0 0 0 0 0 0 1847 1 :
84:0:54:0:1:0:11:2606 1708 0 0 0 0 0 0 0 1826 1 :
84:0:55:0:0:0:11:2637 1739 0 0 0 0 0 0 0 1862 1 :
84:0:50:0:1:0:11:2468 1570 0 0 0 0 0 0 0 1684 1 :
84:0:51:0:0:0:11:2630 1732 0 0 0 0 0 0 0 1856 1 :
#!TOT:84:0:-1:-1:-1:-1:11:108765 88598 0 0 0 0 0 81 27 91314 27 :
84:0:4:0:0:0:11:4521 4521 0 0 0 0 0 0 0 3659 1 :
#!TOT:320:0:-1:3:-1:-1:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 16 :
320:0:4:7:0:0:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
320:0:5:6:0:0:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
320:0:4:1:0:0:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
#!TOT:320:0:-1:8:-1:-1:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
320:0:5:0:0:0:28:267 2789 179 1153 1457 0 0 2789 0 0 0 267 2789 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
320:0:4:6:0:0:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
320:0:5:5:0:0:28:0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 :
#!TOT:320:0:-1:-1:-1:-1:28:614 6571 1278 2316 2977 0 0 3022 0 634 2915 614 6571 0 114 1452 1195 75 182 0 0 0 0 1182 270 114 1452 0 :
What I want to do is look for lines that start with 320 and display the last 28 numbers on my Kibana dashboard.
Right now, when I open my Kibana dashboard, I see an empty dashboard.
Please help!!
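Roughly, this is the kind of filter I have in mind for that. It is only a sketch, not something I have working: the field name counters and the mutate/split step are just placeholders for whatever ends up holding those 28 numbers.
filter {
  # Only handle the counter-group 320 lines (optionally prefixed with "#!TOT:").
  # The six colon-separated header numbers after "320:" are skipped; everything
  # up to the trailing " :" is kept as one string of counters.
  grok {
    match => { "message" => "^(#!TOT:)?320:(%{NUMBER}:){6}%{GREEDYDATA:counters} :" }
  }
  # Turn "0 0 0 ... 16" into an array so each of the 28 counters can be addressed separately.
  mutate {
    split => { "counters" => " " }
  }
}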
You are dropping everything that fails to parse using grok. Have you verified that your grok expression is working properly?
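For example, instead of dropping the events that fail to parse, you could route them to stdout so you can see exactly what is failing. Something along these lines (a sketch only; adjust the elasticsearch settings to match your own config):
output {
  if "_grokparsefailure" in [tags] {
    # Print events the grok filter could not parse so they can be inspected.
    stdout { codec => rubydebug }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    }
  }
}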
Thanks @Christian_Dahlqvist for the fast reply.
I removed that _grokparsefailure condition, but even then it's not showing the output.
This is the output that I get from the Grok Debugger:
{
"st27": "0",
"st28": "0",
"st25": "0",
"st26": "0",
"st23": "0",
"st24": "0",
"st21": "0",
"st22": "0",
"st29": "16",
"st20": "0",
"st2": "0",
"st16": "0",
"st1": "28",
"st17": "0",
"st4": "0",
"st14": "0",
"st3": "0",
"st15": "0",
"st6": "0",
"st12": "0",
"st5": "0",
"st13": "0",
"st8": "0",
"st10": "0",
"st7": "0",
"st11": "0",
"st9": "0",
"st18": "0",
"st19": "0"
}
i.e. the grok parsing is done properly, but it is still not showing up in Kibana.
Can you disable the drop filter to verify that your grok filter also parses the data correctly? When I look at your grok pattern and compare it to the sample data, it looks like there are a few ':' missing...
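One quick way to check that is a throwaway pipeline that reads lines from stdin and prints the parsed result, run with "bin/logstash -f" pointing at a file containing just this config, pasting a few sample lines in. The simplified pattern below is only a stand-in; put your full pattern in its place:
input {
  stdin { }
}
filter {
  grok {
    # Stand-in pattern: replace with the full pattern from your logstash.conf.
    match => { "message" => "^(#!TOT:)?320:%{GREEDYDATA:rest}" }
  }
}
output {
  stdout { codec => rubydebug }
}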
This is my updated logstash.conf file:
I have removed the drop filter, but nothing is showing in Kibana.
input {
  beats {
    port => 5044
  }
}
filter {
  if [fields][log_type] == "syslog" {
    grok {
      match => { "message" => "(#!TOT:)?(320:)(%{NUMBER:st1}:)*%{NUMBER:st2} %{NUMBER:st3} %{NUMBER:st4} %{NUMBER:st5} %{NUMBER:st6} %{NUMBER:st7} %{NUMBER:st8} %{NUMBER:st9} %{NUMBER:st10} %{NUMBER:st11} %{NUMBER:st12} %{NUMBER:st13} %{NUMBER:st14} %{NUMBER:st15} %{NUMBER:st16} %{NUMBER:st17} %{NUMBER:st18} %{NUMBER:st19} %{NUMBER:st20} %{NUMBER:st21} %{NUMBER:st22} %{NUMBER:st23} %{NUMBER:st24} %{NUMBER:st25} %{NUMBER:st26} %{NUMBER:st27} %{NUMBER:st28} %{NUMBER:st29}" }
    }
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
Do you have new data coming in? Can you add a stdout output to verify that data is flowing through the pipeline?
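That could look something like the sketch below: the elasticsearch block stays as it is, and every event is additionally printed to the console/log so you can confirm data is flowing:
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
  # Prints every event passing through the pipeline.
  stdout { codec => rubydebug }
}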
It's just a static file; no new data is coming into it.
Well, if you have processed and potentially dropped all data, how do you expect anything to end up in Elasticsearch and be viewable in Kibana? You probably need to remove your Filebeat registry file and restart it to reprocess the data. You could also copy the file so it appears new to Filebeat.
By remove, do you mean I should delete the Filebeat registry file?
Yes, remove it and restart Filebeat. That should make the file be reprocessed.
I don't know why, but I removed the registry file and then restarted the ELK stack and Filebeat, yet the contents of the registry file are the same as they were an hour before.
Shut down Filebeat and then delete the registry file.
@Christian_Dahlqvist
Actually, Filebeat is getting restarted again even after stopping it. What should I do?
How did you install it? Do you have a service that needs to be stopped?
Okay, now I have successfully restarted Filebeat, and I can see the message coming in from Filebeat, but the message is not getting split into the required fields.
Below is the screenshot of my Kibana dashboard.
You can see that you have a _grokparsefailure tag added, which means that your grok expression is not working. As I mentioned earlier, you do seem to be missing a few ':' that are separating the early fields, but that may not be the only problem. Have a look at this blog post for a guide on how to work with Logstash.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.