I am running the ELK stack with Filebeat in Kubernetes. Filebeat is harvesting logs and sending them to Logstash. This is my Logstash filter:
```
filter {
  if [kubernetes][annotations][elastic_index] {
    mutate {
      add_field => { "[@metadata][es-index]" => "%{[kubernetes][annotations][elastic_index]}" }
    }
  } else if [kubernetes][pod][name] {
    mutate {
      add_field => { "[@metadata][es-index]" => "default-%{[kubernetes][pod][name]}-%{+YYYY.MM.dd}" }
    }
  }
}
```
If my pod has the annotation `elastic_index`, events are written into one rollover index; otherwise a default index with the date should be created. I am using ILM policies for the default indices:
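For completeness, the Elasticsearch output consumes this metadata field roughly like this (the hosts value here is a placeholder, not my actual config):

```
output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]   # placeholder
    index => "%{[@metadata][es-index]}"
  }
}
```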
```json
{
  "default" : {
    "version" : 9,
    "modified_date" : "2020-10-20T09:50:31.399Z",
    "policy" : {
      "phases" : {
        "hot" : {
          "min_age" : "0ms",
          "actions" : {
            "set_priority" : {
              "priority" : null
            }
          }
        },
        "delete" : {
          "min_age" : "6d",
          "actions" : {
            "delete" : {
              "delete_searchable_snapshot" : true
            }
          }
        }
      }
    }
  }
}
```
So, if I understand correctly, the index enters the `hot` phase immediately after creation. Then, once it has been in `hot` for 6 days, it moves to the `delete` phase and the index should be deleted.
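The phase and age that ILM actually sees for an index can be inspected with the ILM explain API, e.g. in Kibana Dev Tools:

```
GET default-myapp-nmnj2-2020.10.08/_ilm/explain
```

The response includes the current `phase` and the `age` ILM computes for the index.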
Now I can see there is an app with an index older than 15 days (22 days old, in fact): `default-myapp-nmnj2-2020.10.08`. Today is 2020.10.30. When I check this index under Kibana -> Index Management, the index with the timestamp 2020.10.08 has a creation time of 2020-10-29 13:43:19.
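The creation date can also be cross-checked outside Kibana via the cat indices API (Dev Tools syntax):

```
GET _cat/indices/default-myapp-*?v&h=index,creation.date.string
```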
Because of this, I can't make older indices read-only: an index such as the myapp one shows a creation date of 2020-10-19 even though, judging by its name, it is 22 days old.
Questions:

- Does Logstash build the date in the index name from the date in the Filebeat document or from Logstash's local time?
- Do I understand the index movement between phases correctly, in this case from `hot` to `delete`?
- Why do the creation times differ? What am I missing? How do I configure this properly?
I am running everything in Kubernetes, deployed via Helm.
Version: 7.8.1