Logstash S3 output plugin fails if bucket name does not contain an underscore (_)

I am using the Logstash S3 output plugin to upload files to Ganesha S3 buckets.
I have to include an underscore in the bucket name to make the plugin work.
If there is no underscore, I get this error:
:error=>"Logstash must have the privileges to write to root bucket applogs, check your credentials or your permissions."

This error is misleading because I know I can access (read and write) the buckets using the S3 CLI.
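For what it's worth, this is roughly the kind of check I ran (a sketch, assuming the AWS CLI is configured with the same credentials; --endpoint-url points it at the Ganesha endpoint instead of AWS):

# List the bucket, then upload a small probe object to prove write access.
aws --endpoint-url https://s3-ganesha.local s3 ls s3://applogs
echo test > probe.txt
aws --endpoint-url https://s3-ganesha.local s3 cp probe.txt s3://applogs/probe.txt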

My S3 output configuration is:
s3 {
  access_key_id => "XXXXXXXX"
  secret_access_key => "XXXXXXXXX"
  region => "us-east-1"
  endpoint => "https://s3-ganesha.local"
  bucket => "applogs"
  additional_settings => {
    "force_path_style" => true
  }
}

I started Logstash in debug mode but could not find any pointers.

I am using the logstash:5.6.9 Docker image.
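For completeness, a sketch of how I ran it (the mount path and config filename are examples following the Docker Hub image conventions; --log.level debug is what turns on debug logging in 5.x):

# Mount the pipeline config into the container and raise the log level.
docker run --rm -v "$PWD":/config-dir logstash:5.6.9 logstash -f /config-dir/logstash.conf --log.level debug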

The same configuration above works if I change the bucket name to "app_logs".
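One more data point from digging: the plugin seems to validate write access at startup by uploading a small test object to the bucket root, and that check is what produces the "privileges to write to root bucket" message. Newer versions of logstash-output-s3 expose a setting to skip it; a minimal sketch, assuming validate_credentials_on_root_bucket is available in the plugin version bundled with this Logstash:

s3 {
  access_key_id => "XXXXXXXX"
  secret_access_key => "XXXXXXXXX"
  region => "us-east-1"
  endpoint => "https://s3-ganesha.local"
  bucket => "applogs"
  # Assumption: skips the startup write test against the bucket root;
  # only present in newer plugin versions.
  validate_credentials_on_root_bucket => false
  additional_settings => {
    "force_path_style" => true
  }
}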

I've only used the S3 input plugin, but as far as I can see it doesn't need the endpoint parameter. Do you really need it?

When AWS composes a virtual-hosted-style domain name for a bucket, the server presents the wildcard SSL certificate for *.s3.amazonaws.com, and a wildcard only matches a single label. I've found that if your bucket name has dots in it and you hit, for example, https://johnsmith.net.s3.amazonaws.com, it will fail with a certificate error. If you change it to https://johnsmith-net.s3.amazonaws.com, it works just fine.

So, in short, it might be an SSL error due to the use of HTTPS and dots in your bucket name. My S3 input plugin is working just fine with a dot in the bucket name, but I don't specify an endpoint.
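If you want to check that theory directly, you can look at the certificate the endpoint actually presents; a quick sketch with openssl (the hostname here is just an example):

# Print the subject of the certificate the server presents; a name that
# doesn't match the bucket hostname would confirm the certificate mismatch.
openssl s_client -connect johnsmith.net.s3.amazonaws.com:443 -servername johnsmith.net.s3.amazonaws.com </dev/null 2>/dev/null | openssl x509 -noout -subject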

https://s3-ganesha.local is an internal tool based on NetApp's DataGrid, so in my case I am not connecting to the usual AWS storage service. That's why I have to set the endpoint explicitly.

The bucket name does not have any dots in it.
