Logstash.inputs.s3 failed fetching events from S3

Hello there,

I set up the Logstash s3 input plugin to fetch CloudTrail logs from S3.
The following errors filled the Logstash plain log:

=========================
...
[2018-08-08T03:40:22,523][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2018-08-08T03:40:22,593][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-08-08T03:40:23,009][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2018-08-08T03:40:25,084][INFO ][logstash.inputs.s3 ] Using default generated file for the sincedb {:filename=>"/var/lib/logstash/plugins/inputs/s3/sincedb_77d547f7f2589dd0e256b37dfde58574"}
[2018-08-08T03:40:29,068][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
[2018-08-08T03:40:29,071][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})
....

I can see an index named "%{[@metadata][beat]}-%{[@metadata][version]}-2018.08.07" there, but I cannot delete it.
Here is my configuration file:

==========================
input {
  beats {
    port => 5044
  }
  s3 {
    access_key_id => "12345"
    secret_access_key => "678910"
    region => "us-east-1"
    bucket => "aaa-apaas-bbb-cloudtrail"
    prefix => "AWSLogs/000000000/CloudTrail/us-east-1/2018/08/"
    #storage_class => standard
    codec => "json"
  }
}

# The filter part of this file is commented out to indicate that it
# is optional.
filter {

}

output {
  elasticsearch {
    action => "index"
    hosts => ["xx.xxx.xx.xx"]
    manage_template => false
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
  }
  s3 {
    access_key_id => "12345"
    secret_access_key => "678910"
    region => "us-east-1"
    bucket => "bucket-name"
  }
}

Please help.

Thank you very much.

[2018-08-08T03:40:29,071][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"})

The index is read-only, possibly because the cluster is in bad shape. Focus your attention on that, not on your s3 input.
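Once the underlying problem is fixed, you also need to lift the block yourself; Elasticsearch doesn't remove it automatically in this version. Something like this should work, assuming the Elasticsearch REST API is reachable on the default port 9200:

curl -XPUT 'http://xx.xxx.xx.xx:9200/_all/_settings' -H 'Content-Type: application/json' -d '
{
  "index.blocks.read_only_allow_delete": null
}'

Setting the value to null resets it to the default, which removes the FORBIDDEN/12 block from all indices so writes and deletes are accepted again.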

index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"

Since your data source isn't Beats, the metadata fields referenced here won't exist, so you won't end up with the index name you expect.
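If you want to keep both inputs in one pipeline, you can branch on those fields in the output instead. A sketch, assuming you're happy to put the CloudTrail events in their own index (the "cloudtrail-" name is just an example, pick whatever suits you):

output {
  if [@metadata][beat] {
    # Events from the beats input carry these metadata fields
    elasticsearch {
      hosts => ["xx.xxx.xx.xx"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  } else {
    # Events from the s3 input don't, so give them a fixed index name
    elasticsearch {
      hosts => ["xx.xxx.xx.xx"]
      index => "cloudtrail-%{+YYYY.MM.dd}"
    }
  }
}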

This can often indicate that you are running out of disk space in your Elasticsearch cluster and have hit the flood-stage disk watermark, at which point Elasticsearch marks indices read-only.
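You can check per-node disk usage with the cat allocation API, e.g.:

curl 'http://xx.xxx.xx.xx:9200/_cat/allocation?v'

If disk.percent is above the flood-stage watermark (95% by default), Elasticsearch puts the affected indices into this read-only state, and the block stays in place until you free up space and clear it as described above.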
