High Availability for Logstash Input Processing

Hello, I'm currently using a Logstash server to process logs stored in an AWS S3 bucket via the s3 input and push them to an AWS Elasticsearch Service cluster.

However, I need production-level redundancy, i.e. high availability, for processing these S3 logs: a standby instance ready to fail over when required and pick up where the other left off, with no duplicates or overlap. That last part is the piece I'm not fully across.

I've set up Logstash instances with failover before, behind a load balancer with a port check, but that was with multiple servers and Beats pushing to them. I'm not sure how to approach this situation, where the input processing itself needs the redundancy. Any advice?

Logstash doesn't help you much here. You can't safely run more than one s3 input against the same bucket, because each instance tracks its own progress through the bucket, so a second instance will re-read objects and produce duplicates. The usual workaround is to accept the duplicates and filter them out downstream, which is probably the easiest option.
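One common way to make those duplicates harmless, assuming you're indexing with the elasticsearch output, is to derive a deterministic document ID from the event content using the fingerprint filter. If two instances index the same log line, the second write just overwrites the first document instead of creating a duplicate. A minimal sketch (the Elasticsearch endpoint below is a placeholder for your own domain):

```
filter {
  # Hash the raw message into a stable ID; identical events
  # from either Logstash instance produce the same fingerprint.
  fingerprint {
    source => "message"
    method => "SHA256"
    target => "[@metadata][fingerprint]"
  }
}

output {
  elasticsearch {
    hosts       => ["https://my-es-domain.example.com:443"]  # placeholder endpoint
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```

Note the trade-off: genuinely identical log lines (same message text) will also collapse into one document, so if that matters you'd want to include distinguishing fields (e.g. the S3 key or a timestamp) in the fingerprint source.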

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.