I am a new ELK user from China. When the message value is very large, it costs too much CPU time to process. I don't know how to fix that; the details are below.
ELK5.1.1
CentOS7
JDK8
Log output:
INFO 16-12-22 13:34:40.998 - The user[12345] start login
INFO 16-12-22 13:34:40.998 - The user[67890] start login
(the app log output can be huge, around 20 KB per line)

Here is the problem: I want to extract the userId (12345, 67890, etc.) for auditing, so I did this.
Logstash config:
filter {
  grok {
    patterns_dir => "/path/to/patterns/"
    match => ["message", "%{USERLOGIN}"]
  }
  if [c] !~ "The user" {
    drop {}
  }
  if "_grokparsefailure" in [tags] {
    drop {}
  }
}
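A side note on the CPU cost: I read that anchoring a grok pattern with `^` can reduce backtracking on long lines that don't match, because grok can give up right at the start instead of retrying the pattern at every position. I am not sure it fully solves my problem, but something like this:

```conf
grok {
  patterns_dir => "/path/to/patterns/"
  # "^" anchors the match at the start of the line, so huge lines
  # that don't begin with a log level fail fast instead of being
  # scanned repeatedly
  match => ["message", "^%{USERLOGIN}"]
}
```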
Patterns file:
USERLOGIN %{LOGLEVEL:loglev}%{SPACE}%{DATESTAMP:date}%{SPACE}%{SPACE}-%{SPACE}%{GREEDYDATA:c}
Output:
{
"date" => "16-12-22 13:34:40.998",
"loglev" => "INFO",
"@timestamp" => 2016-12-23T06:25:29.605Z,
"c" => "The user[12345] start login",
"@version" => "1",
"host" => "0.0.0.0",
"type" => "login",
"tags" => []
}
But I don't know how to extract the userId from that field. How can I use the mutate plugin (or some other filter) to capture the userId, so the output looks like this:
{
"date" => "16-12-22 13:34:40.998",
"loglev" => "INFO",
"@timestamp" => 2016-12-23T06:25:29.605Z,
"c" => "12345",
"@version" => "1",
"host" => "0.0.0.0",
"type" => "login",
"tags" => []
}
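My own guess (not tested, and the `\[%{NUMBER:c}\]` capture is just my assumption) is that instead of mutate, I could change the grok pattern itself to capture only the bracketed number into `c`:

```conf
# patterns file: capture only the number inside the brackets into "c"
USERLOGIN %{LOGLEVEL:loglev}%{SPACE}%{DATESTAMP:date}%{SPACE}-%{SPACE}The user\[%{NUMBER:c}\] start login
```

If this works, lines that don't contain "The user[...]" would get `_grokparsefailure` and be dropped by the existing conditional, so the separate `[c] !~ "The user"` check might not be needed any more. Is that the right approach?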
I hope you guys can help me.
English is not my native language, but I hope you can understand me. LOL