Kalyan_MB
(Kalyan Mb)
April 9, 2019, 8:09am
1
Hello,
I have the below requirement to achieve; can anyone help me with this?
1. Scan performance logs from a directory (client) and parse them into the ELK stack (server).
2. Create index patterns for them using the logstash.conf file.
3. Use this index pattern to create graphs.
4. Once these graphs are saved, is there any way to keep updating the same index when new log files are scanned and an index pattern is created with the same name?
My log files look like this:
total used free shared buff/cache available
Mem: 128732 41030 44215 40285 43486 46630
Swap: 12287 0 12287
filebeat.yml file:
filebeat.prospectors:
- input_type: log
  paths:
    - /home/vankata/190_APS_QUALIFICATION/kalyan_elk_logs/free_data*.txt
    - /home/vankata/190_APS_QUALIFICATION/Audproc_cscf11/free_data*.txt
  encoding: utf-8
  fields_under_root: true
  document_type: log
  fields:
    service_name: cfx
    app_name: cfx_perf_logs
logstash.conf:
input {
  beats {
    port => 5044
  }
}
filter {
  dissect {
    mapping => { 'message' => '%{mem_type} %{total_mem} %{used_mem} %{free_mem} %{shared_mem} %{cache_mem} %{availablemem}' }
  }
}
output {
  if [service_name] == "cfx" {
    elasticsearch {
      #path => "/var/log/sdl_logs/%{vnf_id}/%{vm_type}/%{instance_id}/%{service_name}/%{app_name}%{+yyyy-MM-dd-HH}.log"
      #codec => line { format => "%{message}" }
      #gzip => true
      hosts => ["http://x.x.x.x:9200"]
      index => "perform1"
    }
  }
  if [service_name] == "telemetry-agent" {
    file {
      path => "/var/log/sdl_logs/%{vnf_id}/%{vm_type}/%{instance_id}/%{service_name}/%{app_name} %{+yyyy-MM-dd-HH}.log"
      codec => line { format => "%{message}" }
      #gzip => true
    }
  }
}
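For reference, I assume the resulting index can be checked directly against Elasticsearch with something like the following (x.x.x.x is a placeholder host, as above):

```shell
# Count documents in the "perform1" index
curl -s 'http://x.x.x.x:9200/perform1/_count?pretty'

# Show the index and its document count
curl -s 'http://x.x.x.x:9200/_cat/indices/perform1?v'
```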
BKG
April 9, 2019, 8:54am
2
It's a very big question. If I were you, I would check every step individually.
You can run Filebeat from the command line with the -e -d "*" flags to show the results of harvesting the file and ensure its configuration is working perfectly.
Then you can run Logstash from the command line, which will let you see the incoming logs from Filebeat, whether the fields are mapped correctly, and whether Logstash can communicate with Elasticsearch.
Good luck!
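In concrete terms (binary locations are assumptions; adjust for your install), that would be something like:

```shell
# Run Filebeat in the foreground with all debug selectors enabled
./filebeat -e -d "*" -c filebeat.yml

# First validate the Logstash pipeline config, then run it in the foreground
bin/logstash -f logstash.conf --config.test_and_exit
bin/logstash -f logstash.conf --log.level=debug
```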
Kalyan_MB
(Kalyan Mb)
April 9, 2019, 9:02am
3
@BKG I understand it's a big question! Thanks for your time; here is the output of the binary runs:
2019-04-09T13:15:53.952+0530  DEBUG  [prospector]  log/prospector.go:362  Check file for harvesting: /home/vankata/190_APS_QUALIFICATION/Audproc_cscf11/free_data_11_09_03_40.txt
2019-04-09T13:15:53.952+0530  DEBUG  [prospector]  log/prospector.go:448  Update existing file for harvesting: /home/vankata/190_APS_QUALIFICATION/Audproc_cscf11/free_data_11_09_03_40.txt, offset: 204
2019-04-09T13:15:53.952+0530  DEBUG  [prospector]  log/prospector.go:502  File didn't change: /home/vankata/190_APS_QUALIFICATION/Audproc_cscf11/free_data_11_09_03_40.txt
From the above, what I can make out is that since the file content has not changed, harvesting is already done?
output {
  if [service_name] == "cfx" {
    elasticsearch {
      #path => "/var/log/sdl_logs/%{vnf_id}/%{vm_type}/%{instance_id}/%{service_name}/%{app_name}_%{+yyyy-MM-dd-HH}.log"
      #codec => line { format => "%{message}" }
      #gzip => true
      hosts => ["http://x.x.x.x:9200"]
      index => "perform1"
    }
  }
}
In Logstash I am giving the output to Kibana, so I won't be able to see the formatted output file in the ELK setup; correct me if I am wrong.
Thanks in advance.
BKG
April 9, 2019, 9:25am
4
Is the file getting written to often enough to test?
You could set up a script to add to the file for testing purposes. Something like:
while true; do echo "my,test,csv,data,goes,here" >> /home/vankata/190_APS_QUALIFICATION/Audproc_cscf11/free_data_11_09_03_40.txt; sleep 5; done
Kalyan_MB
(Kalyan Mb)
April 9, 2019, 9:34am
5
Right now the directory where I am working has fixed log files; I will change a few files' contents and try.
Kalyan_MB
(Kalyan Mb)
April 9, 2019, 12:36pm
6
Hi @BKG ,
Below are the first lines of the log input:
total used free shared buff/cache available
Mem: 128732 44366 32347 40869 46018 42704
Filter used in Logstash:
filter {
  dissect {
    mapping => { 'message' => '%{mem_type} %{total_mem} %{used_mem} %{free_mem} %{shared_mem} %{cache_mem} %{availablemem}' }
  }
}
What I see in the Kibana dashboard's Discover tab:
Is this filter's output correctly mapped into the index, or should I use another type of filter? Please let me know.
Badger
April 9, 2019, 12:43pm
7
That tells dissect to expect a single space between the entries on the line. Your input has multiple spaces. dissect can handle that using -> in the field name.
dissect {
  mapping => { 'message' => '%{mem_type->} %{total_mem->} %{used_mem->} %{free_mem->} %{shared_mem->} %{cache_mem->} %{availablemem}' }
}
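As a rough illustration of what the -> modifier buys you (this is plain shell word-splitting, not dissect itself), a run of spaces gets consumed as a single delimiter:

```shell
# Default shell word-splitting collapses runs of spaces, much like
# dissect's -> padding modifier does for the fields above
line='Mem:      128732    41030    44215    40285    43486    46630'
set -- $line
echo "mem_type=$1 total_mem=$2 availablemem=$7"
# prints: mem_type=Mem: total_mem=128732 availablemem=46630
```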
Kalyan_MB
(Kalyan Mb)
April 9, 2019, 12:45pm
8
@Badger thanks, I will apply this change and try!
Kalyan_MB
(Kalyan Mb)
April 9, 2019, 2:13pm
9
Keeping the below as the filter:
filter {
  dissect {
    mapping => { 'message' => '%{mem_type->} %{total_mem->} %{used_mem->} %{free_mem->} %{shared_mem->} %{cache_mem->} %{availablemem->}' }
  }
}
and the input file as:
total used free shared buff/cache available
Mem: 128132 42349 38321 40870 46062 42720
Swap: 12387 0 12287
What I could see in Kibana after the index creation is:
- for mem_type: Mem, the total_mem field is missing
- for mem_type: Swap, free_mem is empty
Can you please comment on this?
Badger
April 9, 2019, 2:46pm
10
Kalyan_MB:
Swap: 12387 0 12287
I am really surprised you do not get a _dissectfailure for that message.
You could make the dissect conditional:
if [message] =~ /Mem:/ {
  dissect { mapping => { 'message' => '%{mem_type->} %{total_mem->} %{used_mem->} %{free_mem->} %{shared_mem->} %{cache_mem->} %{availablemem}' } }
} else if [message] =~ /Swap:/ {
  dissect { mapping => { 'message' => '%{mem_type->} %{total_mem->} %{used_mem->} %{free_mem->}' } }
}
Or you could use conditional fields in a grok filter:
grok { match => { "message" => "%{WORD:mem_type}:\s+%{NUMBER:total_mem:int}\s+%{NUMBER:used_mem:int}\s+%{NUMBER:free_mem:int}(\s+%{NUMBER:shared_mem:int}\s+%{NUMBER:cache_mem:int}\s+%{NUMBER:availablemem:int})?" } }
Note the use of ( )? to make the last 3 fields optional.
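The same optional-group idea can be checked outside Logstash with a POSIX ERE (a hand-translated approximation of the grok pattern, using [[:space:]] in place of \s):

```shell
# Mandatory part: a name plus three numbers; optional group: three more numbers
re='^[A-Za-z]+:[[:space:]]+[0-9]+[[:space:]]+[0-9]+[[:space:]]+[0-9]+([[:space:]]+[0-9]+[[:space:]]+[0-9]+[[:space:]]+[0-9]+)?$'
echo 'Mem: 128732 41030 44215 40285 43486 46630' | grep -Eq "$re" && echo 'Mem line matches'
echo 'Swap: 12287 0 12287' | grep -Eq "$re" && echo 'Swap line matches'
```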
Kalyan_MB
(Kalyan Mb)
April 10, 2019, 7:00am
12
As I am giving the output to Kibana as above, will I be able to locate the index file on the ELK stack machine? If yes, where can I locate it in the case of a .tar package installation on RHEL?
system
(system)
Closed
May 8, 2019, 7:00am
13
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.