Hi all,
I have searched the forum for this type of question, but I did not manage to make use of what I found. I am a newbie to the whole ELK stack. Any help would be highly appreciated.
I have the following log type:
PROCESS=process_name; DEPLOY-DIR=/home/build/stuff/; PID=12345; started at 10:00:00 and stopped at 10:01:00
[......]
INFO[10:00:01,184] - Config - Log4j property configuration chosen
[......]
INFO[10:00:02,801] - ConnectionFactory - Initializing...
[......]
One of the Logstash configs that I tried looks like this:
input {
  beats {
    port => 5044
  }
}
filter {
  grok {
    match => { "message" => "PROCESS=%{GREEDYDATA:job}; DEPLOY-DIR=%{GREEDYDATA:deploydir}; PID=%{BASE10NUM:pid}; started at %{TIME:process_start} and stopped at %{TIME:process_end}" }
  }
  grok {
    match => { "message" => "%{LOGLEVEL:loglevel}\[.*\] - %{WORD:class} - %{GREEDYDATA:message}" }
    add_field => { "job_name" => "%{job}" }
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}
output {
  elasticsearch {
    hosts => ["server_name:9200"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
What I'm trying to achieve is to add a new field, called job (taken from the first line of the log), to the rest of the lines from that log. Can I achieve this? I tried multiple types of filters, but I cannot seem to get job populated for every row from that log file.
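Is something like the aggregate filter the right direction? This is roughly what I was imagining, as an untested sketch and not working code: the %{source} task_id, the field names, and the single-worker requirement are just my assumptions pieced together from other threads.

filter {
  # Header line: remember the job value, keyed by the source file path.
  # (Filebeat puts the path in "source" in my version; it may be [log][file][path] in newer ones.)
  if [job] {
    aggregate {
      task_id => "%{source}"
      code => "map['job'] = event.get('job')"
    }
  } else {
    # Every other line from the same file: copy the remembered job onto the event.
    aggregate {
      task_id => "%{source}"
      code => "event.set('job', map['job']) if map['job']"
      map_action => "update"
    }
  }
}

From what I read, this would also need pipeline.workers set to 1 so the lines stay in order, but I am not sure I got that right either. Thank you.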