Try using index lifecycle management (ILM), which is available in the Elastic Stack from version 6.6 onwards.
Please check this link:
The policy below rolls over to a new index when the current one grows beyond 2 GB or becomes 1 day old, and it deletes each index 1 day after it has rolled over.
PUT _ilm/policy/stream_policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "2GB",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "1d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
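For rollover to work, the policy also needs to be attached to your indices and a write alias has to exist. A minimal sketch, assuming an index pattern of stream-* and a write alias named stream (both names are placeholders, adjust them to your own setup):

# legacy index template that applies the policy to every new stream-* index
PUT _template/stream_template
{
  "index_patterns": ["stream-*"],
  "settings": {
    "index.lifecycle.name": "stream_policy",
    "index.lifecycle.rollover_alias": "stream"
  }
}

# bootstrap the first index and mark it as the write index for the alias
PUT stream-000001
{
  "aliases": {
    "stream": { "is_write_index": true }
  }
}

You can then follow how each index moves through the policy with GET stream-*/_ilm/explain.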
Thank you, ma'am.
Can I delete the kibana.log file using the command below to free up space? Will it affect the data that I have logged in the Kibana index?
==> cat /dev/null > kibana.log
Ma'am, can you answer my topic as well?
Hi @bbkunbi
The Kibana logs you are showing have nothing to do with the index data that is stored in Elasticsearch.
The one exception would be if you were using Filebeat to process the Kibana logs and ship them into Elasticsearch, but I suspect you are not doing that.
Even then, erasing the Kibana log would not erase the data that resides inside Elasticsearch; they are two completely separate things.
The Kibana logs you are referring to are simply Kibana application logs, nothing more.
You should refer to this
So yes, you can clean up those logs to free up space.
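If you want to convince yourself of that, you could compare the document count of an index before and after truncating the log; it should not change. A minimal check, assuming the default .kibana saved-objects index (the name may differ if kibana.index was changed):

# count the documents in Kibana's saved-objects index; running this before
# and after truncating kibana.log should return the same number
GET .kibana/_count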
Thank you!