Logstash s3 output doesn't work with conditionals

Hi all,

I made a config for Logstash.
I want logs that match logtype == request and env == prd to be saved to Bucket 1,
and logs that match logtype == request and env == stg to be saved to Bucket 2.

However, something is wrong: all logs are saved to Bucket 1, and I don't know why.
Is there something wrong with my config? Please help me fix it.

Regards,
Khoa Nguyen

output {

  if [logtype] == "request" and [env] == "prd" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX"
      bucket => "XXX1"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
  if [logtype] == "request" and [env] == "stg" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX2"
      bucket => "XXX"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
}

Your configuration looks okay. Have you verified that the logtype and env fields have the values you expect, i.e. that they're not all equal to "request" and "prd"? Have you made sure that you don't have a leftover configuration file in /etc/logstash/conf.d that contains an s3 output declaration that unconditionally saves everything to bucket 1?
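
One quick way to verify is to temporarily add a stdout output with the rubydebug codec, which prints each event with all of its fields so you can see exactly what logtype and env contain:

output {
  stdout { codec => rubydebug }
}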

@magnusbaeck: I checked my configuration many times. The logtype and env values are correct.
I don't have any leftover configuration file that would make all logs save to bucket 1. It's weird.

Do you have any ideas for debugging this issue? The full config is below. I want all logs to be saved into ES as well, while some logs also go to an S3 bucket.

output {
  elasticsearch {
    host => "XXX"
    protocol => "http"
    cluster => "XXX"
  }
  if [logtype] == "request" and [env] == "prd" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX"
      bucket => "XXX1"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
  if [logtype] == "request" and [env] == "stg" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX2"
      bucket => "XXX"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
}

I changed the Logstash output config to remove sending logs to the second bucket.

It works properly now, so I think the root cause is in the S3 output plugin.
It seems this plugin only allows saving data to one bucket.

output {
  elasticsearch {
    host => "XXX"
    protocol => "http"
    cluster => "XXX"
  }
  if [logtype] == "request" and [env] == "prd" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX"
      bucket => "XXX1"
      endpoint_region => "us-east-1"
      time_file => 1
    }
  }
}
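
For what it's worth, a known pitfall with multiple s3 outputs in one config is that they can share the same temporary buffer directory, so whichever output uploads first can ship everything to its bucket. If your version of logstash-output-s3 supports the temporary_directory option (newer versions of the plugin document it), pointing each conditional output at its own directory might let both buckets work. A sketch, assuming that option is available in your plugin version (the /tmp paths are just examples):

output {
  if [logtype] == "request" and [env] == "prd" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX"
      bucket => "XXX1"
      endpoint_region => "us-east-1"
      time_file => 1
      # assumption: temporary_directory is supported by this plugin version
      temporary_directory => "/tmp/logstash-s3-prd"
    }
  }
  if [logtype] == "request" and [env] == "stg" {
    s3 {
      access_key_id => "XXX"
      secret_access_key => "XXX2"
      bucket => "XXX"
      endpoint_region => "us-east-1"
      time_file => 1
      # separate temp dir so the two outputs don't collide
      temporary_directory => "/tmp/logstash-s3-stg"
    }
  }
}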