Hello,
I'm currently using Elasticsearch, Kibana, Filebeat, and Logstash to collect logs across servers and Docker containers. The drive has grown too large and the logs are taking up a lot of disk space. What can I do to delete logs older than 3 months?
Thank you so much.
Hi @Agaaam,
Welcome! Are your indices dated? If so, you can simply delete the older indices. If not, and you need a quick fix to delete documents from an index, you can use a delete by query with a range query, similar to this one in the documentation for the last day, to identify documents with timestamps older than 3 months.
Just a warning: I would check the query first with a _search before running the delete, to make sure you are happy with the results.
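A minimal sketch of that workflow, assuming a Filebeat index pattern like `filebeat-*` and the default `@timestamp` field (adjust both to your own setup):

```
# 1. Dry run: count how many documents are older than 3 months
GET filebeat-*/_search
{
  "size": 0,
  "query": {
    "range": {
      "@timestamp": { "lte": "now-3M/d" }
    }
  }
}

# 2. If the count looks right, delete those documents
POST filebeat-*/_delete_by_query
{
  "query": {
    "range": {
      "@timestamp": { "lte": "now-3M/d" }
    }
  }
}
```

Note that delete by query only marks documents as deleted; disk space is reclaimed gradually as segments merge. Deleting whole dated indices is much cheaper if your setup allows it.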
Longer term I would also recommend looking at using ILM to manage deletion of older logs and indices automatically.
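As a sketch of what that ILM setup could look like, here is a minimal policy with only a delete phase (the policy name `logs-retention` and the 90-day age are illustrative assumptions; the policy still needs to be attached to your indices via an index template):

```
PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```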
Hope that helps!
Thank you so much for the answer.
Can this also be implemented for logs from applications running under Docker, which I send via Filebeat to Logstash?
I already tried querying

http://11.21.12.44:9200/filebeat-8.5.0-*/_search?pretty

with this request body:

{
  "query": {
    "range": {
      "timestamp": {
        "gte": "now-1d/d",
        "lte": "now/d"
      }
    }
  }
}
but I only get this response:

{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 0,
    "successful": 0,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 0,
      "relation": "eq"
    },
    "max_score": 0.0,
    "hits": []
  }
}
thank you
Just a heads up @Agaaam that the above query only covers a single day. Can you also try using the field @timestamp instead of timestamp in your range query?
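Putting those two fixes together, a search for documents older than 3 months might look like this (assuming the standard Filebeat `@timestamp` field; verify the hits before running any delete):

```
GET filebeat-8.5.0-*/_search?pretty
{
  "query": {
    "range": {
      "@timestamp": { "lte": "now-3M/d" }
    }
  }
}
```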
@carly.richmond Alright, thank you, I already found it. Thank you so much for your help!