Deleting processed objects from S3 when using Beats input (and Filebeat with aws-s3 input)

Hi,
Can anyone suggest a (hopefully uncomplicated) way to delete log files from S3 once they've been processed, in a setup using Filebeat and Logstash with S3 and SQS?

I had previously used the Logstash s3 input plugin on its own, which does support this via its `delete` config option.
But I wanted to switch to SQS-based notifications, since that should scale better (multiple Logstash nodes, HA, etc.).
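For reference, the old single-node setup was roughly this - a plain s3 input with delete enabled (bucket name and region here are made up):

```
input {
  s3 {
    bucket => "my-log-bucket"   # placeholder bucket name
    region => "us-east-1"       # placeholder region
    delete => true              # remove each object once it has been processed
  }
}
```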
My current setup is fairly simple: Filebeat reads from S3 via the aws-s3 input plugin (fed by S3 event notifications on an SQS queue), ships to Logstash over the beats input listener, and Logstash then outputs to various places.
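Trimmed-down versions of the current configs, in case it helps (the queue URL and hosts are placeholders):

```yaml
# filebeat.yml - read objects referenced by S3 event notifications on an SQS queue
filebeat.inputs:
  - type: aws-s3
    queue_url: https://sqs.us-east-1.amazonaws.com/123456789012/s3-log-events

output.logstash:
  hosts: ["logstash.example.com:5044"]
```

```
# Logstash pipeline - listen for Filebeat, then fan out
input {
  beats {
    port => 5044
  }
}
```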

But with this setup I can't see any built-in option to delete processed files :frowning:

I was thinking about switching to an S3 Lifecycle Policy, but that just deletes objects after a set time, regardless of whether they've been processed. I kinda liked the idea of Logstash doing it - that way you knew for sure the file had been processed.
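i.e. a purely time-based rule like this (prefix and retention are just examples), applied with `aws s3api put-bucket-lifecycle-configuration`:

```json
{
  "Rules": [
    {
      "ID": "expire-shipped-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Expiration": { "Days": 7 }
    }
  ]
}
```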

I was also thinking about adding another SQS output (to a different, "processed" queue) and using something like Lambda to delete the file. But that seems overcomplicated and an extra headache.
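Just to sketch what I mean: Logstash would push a small message per processed file onto a second queue, and a Lambda along these lines would do the delete (the message schema here is entirely made up - it'd have to match whatever the Logstash sqs output actually emits):

```python
import json

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Triggered by the "processed" SQS queue. Each message body is assumed
    # to be JSON containing the bucket and key of an already-processed
    # object (a hypothetical schema for illustration only).
    for record in event["Records"]:
        body = json.loads(record["body"])
        s3.delete_object(Bucket=body["bucket"], Key=body["key"])
```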

Does anyone know if there's an option to do this post-process pruning that I'm not seeing?

Thanks...