Hi,
I recently added a filter to my Logstash pipeline to change the index pattern for specific indices (weekly, monthly, etc.).
The filter is this:
filter {
  if [retention] == "weekly" {
    mutate { add_field => { "[@metadata][target_index]" => "%{program}-%{+xxxx.ww}" } }
  } else if [retention] == "monthly" {
    mutate { add_field => { "[@metadata][target_index]" => "%{program}-%{+YYYY.MM}" } }
  } else {
    mutate { add_field => { "[@metadata][target_index]" => "%{program}-%{+YYYY.MM.dd}" } }
  }
}
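For context, a filter like this only takes effect if the elasticsearch output references the metadata field. A minimal sketch of the kind of output section this assumes (the host here is a placeholder, not my real setup):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "%{[@metadata][target_index]}"
  }
}
```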
After updating Logstash, I noticed that the logs I receive have every bit of information crammed into the message field, like this:
{"@timestamp":"2021-02-22T14:20:01+00:00","source":"x","message":"xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx","level":x,"thread_id":xxxxx,"thread_name":"logger","retention":"weekly","service":"xxx","program":"xxxxxxxxx"}
So I also added this right before the new filter:
filter {
  if [message] =~ "\A\{.+\}\z" {
    json { source => "message" }
  }
}
After this, the logs started appearing in the desired format. I now administer more than 90 indices, and every index except one is working just fine.
The problem with this specific index is that it has a data field whose contents change constantly. Each time a log arrives from a different source, the filter creates fields for the entire data object. In Kibana, the available fields look like this:
The index now has thousands of these fields and doesn't receive new logs anymore.
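To illustrate what I mean (these field names are made up, not my real data): two logs from different sources can carry completely different keys under data, and dynamic mapping turns every key into a new index field:

```
{"program":"api","retention":"weekly","data":{"order_ref":"A1","qty":2}}
{"program":"api","retention":"weekly","data":{"session_token":"xxx","locale":"en"}}
```

So the mapping keeps accumulating data.order_ref, data.qty, data.session_token, and so on. I suspect the index has hit the index.mapping.total_fields.limit setting (1000 by default), which would explain why new documents are being rejected.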
The logger is set up like this:
$syslog_message = json_encode([
    '@timestamp'  => date('c'),
    'source'      => gethostname(),
    'message'     => $message,
    'level'       => $level,
    'thread_id'   => 'xxxxxxx-' . uniqid(),
    'thread_name' => 'logger',
    'program'     => 'xxxxxxx',
]);

$message = [
    "path" => $req->getUri()->getPath(),
    "data" => $req->getParsedBody(),
];
Is there a way to fix this, either by changing the Logstash filter or by changing the logger?
Thanks in advance.