S3 Input not working


(Zorlin) #1

Hey there,

We have a Logstash setup processing a bunch of different inputs successfully.

Unfortunately, the S3 input is not working.

Here is the current config I'm working with:

input {
  s3 {
    bucket => "production-logs-elb"
    region => "ap-southeast-2"
    type => "prod-elb"
    prefix => "AWSLogs/481913130099/elasticloadbalancing/ap-southeast-2/"
  }
}

Logstash doesn't seem to be logging any errors.
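
To rule out a silent failure, we can also raise the log level when starting Logstash; a minimal sketch, assuming Logstash 5.x (the config path here is just an example):

# run the pipeline with debug logging to surface S3 input activity
bin/logstash --log.level=debug -f /etc/logstash/conf.d/s3.conf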


(Mark Walkom) #2

What does that mean exactly?


(Zorlin) #3

Hi Mark,

No logs are making it into ES/Kibana. We have ES creating indices based on the type, like so:

output {
  amazon_es {
    hosts => ["redacted.es.amazonaws.com"]
    region => "ap-southeast-2"
    manage_template => false
    index => "%{[type]}-%{+YYYY.MM.dd}"
  }
}

And there are no prod-elb indices being created.
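
One quick way to confirm that from the command line (a sketch; assuming the domain's access policy allows requests from this host):

# list any prod-elb indices on the cluster; an empty result means none were created
curl "https://redacted.es.amazonaws.com/_cat/indices/prod-elb-*?v"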


(Mark Walkom) #4

And if you add a stdout {codec => rubydebug} does it show anything?
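
Something like this, alongside the existing amazon_es output, so every event is also printed to the console:

output {
  # print each event to the console for debugging
  stdout { codec => rubydebug }
  amazon_es {
    hosts => ["redacted.es.amazonaws.com"]
    region => "ap-southeast-2"
    manage_template => false
    index => "%{[type]}-%{+YYYY.MM.dd}"
  }
}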


(Zorlin) #5

Hi - we may test that later, but we've already started down an alternative path. We created a new S3 bucket, imitated the structure of the old one, and pulled in some logs to test - and Logstash now processes those correctly. In theory only three things have changed: a) the name of the bucket, b) the number of folders and objects in those folders, and c) no Glacier objects.

We believe we've narrowed it down to two possible causes: either there were too many files in the old bucket, causing Logstash to choke, or the S3 input plugin doesn't like Glacier files. We're testing both theories now.
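
To check the Glacier theory, we can verify whether the old bucket actually contains Glacier-class objects; a sketch using the AWS CLI, assuming credentials with s3:ListBucket on the bucket:

# print the keys of any objects under the ELB prefix stored in the GLACIER class
aws s3api list-objects-v2 \
  --bucket production-logs-elb \
  --prefix "AWSLogs/481913130099/elasticloadbalancing/ap-southeast-2/" \
  --query "Contents[?StorageClass=='GLACIER'].Key" \
  --output text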


(Ry Biesemeyer) #7

The S3 input will currently fail when attempting to process a Glacier-archived file, but I've opened a PR on the plugin to add support: https://github.com/logstash-plugins/logstash-input-s3/pull/160
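
Until that lands, one possible workaround (a sketch, assuming the AWS CLI and s3:RestoreObject permission; the object key shown is just an example) is to restore the archived objects back into S3 before Logstash reads them:

# request a temporary 7-day restore of a Glacier-archived object
aws s3api restore-object \
  --bucket production-logs-elb \
  --key "AWSLogs/481913130099/elasticloadbalancing/ap-southeast-2/example.log" \
  --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Standard"}}'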