Is there any way to send logs to S3 to overcome the space issue on the ELK server?

Hi, I'm facing a problem with my ELK server. I have installed Filebeat and Metricbeat on the server. The root mount is getting filled with logs and running out of space, so new logs are no longer being loaded into Elasticsearch. Can we overcome this problem by sending logs to S3 from Filebeat/Metricbeat?

Is there any scenario like Filebeat (logs) --> S3 --> Kibana?

Please help with this.

Hi @krish2! 🙂

You can use ILM to move your older indices to cheaper machines (https://www.elastic.co/guide/en/beats/metricbeat/master/ilm.html and https://www.elastic.co/guide/en/elasticsearch/reference/master/index-lifecycle-management.html), or use snapshots to move that data to S3 as a backup: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html.
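For illustration, here is a minimal sketch of both ideas; the policy name, rollover thresholds, repository name, and bucket are made-up examples, not anything from your setup. An ILM policy that rolls indices over and deletes them after 30 days could look like this in Kibana Dev Tools:

```
PUT _ilm/policy/beats-cleanup-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "5gb", "max_age": "7d" }
        }
      },
      "delete": {
        "min_age": "30d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

And to archive to S3, you register an S3 snapshot repository (this assumes the repository-s3 plugin is installed and AWS credentials are configured) and then take a snapshot:

```
PUT _snapshot/my_s3_repo
{
  "type": "s3",
  "settings": { "bucket": "my-log-archive" }
}

PUT _snapshot/my_s3_repo/snapshot_1?wait_for_completion=true
```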

However, you cannot consume data directly from S3.

I hope this helps!

Thank you Mario_Castro for the good suggestion. But I have a doubt: if we want the old indices to stay available in Elasticsearch so we can see historical logs, what is the best way to do that?
Thanks in advance. And one more thing: I just enabled and started Filebeat. My root mount is 20 GB, of which 13 GB was available. Within one hour of starting Filebeat and Logstash, Logstash was throwing an error and the root mount was almost full.

How can I overcome this issue?
Please help me with this.

Hi,

I have removed two directories from the location
/var/lib/elasticsearch/nodes/0/indices

These are the two indices that I removed from that location:
3cQZvpsjRBmfAz_SLcWdpQ
uIxRIDZ0SLmHeJD-eREypA

After that I restarted Elasticsearch, and it failed to start. Please help with this.
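
For reference, the supported way to free that space would have been the delete index API rather than removing directories from the data path; a minimal example (the index name here is hypothetical):

```
DELETE /filebeat-7.6.0-2020.03.01
```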

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.