Always get SIGTERM when starting Logstash with s3 output

I use the stable/logstash Helm chart (chart version 2.3.0, app version 7.1.1) to deploy Logstash. I have no issues with the elasticsearch output, but I always get a SIGTERM with the s3 output. I don't see any obvious cause of this in the Logstash log. Any help would be appreciated.

This is how I configured the elasticsearch output, which works:

outputs:
  main: |-
    output {
      elasticsearch {
        hosts => ["${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}"]
        index => "%{[@metadata][type]}-%{+YYYY.MM.dd}"
        manage_template => true
        template_overwrite => true
        template => "/usr/share/logstash/files/edge-template.json"
        template_name => "logstash-edge"
      }
    }

elasticsearch:
  host: "es-prod-master"

This is how I configured the s3 output, which always gets a SIGTERM:

outputs:
  main: |-
    output {
      s3 {
        access_key_id => "xxxxxxx"
        secret_access_key => "xxxxxxxx"
        region => "eu-central-1"
        bucket => "xxxx"
        size_file => 1048576
        time_file => 5
        prefix => "%{+yyyy.MM.dd.HH.mm}"
      }
    }

This is the Logstash log from startup until the SIGTERM is received with the s3 output:

2019/10/13 04:55:04 Setting 'queue.max_bytes' from environment.
2019/10/13 04:55:04 Setting 'path.config' from environment.
2019/10/13 04:55:04 Setting 'queue.drain' from environment.
2019/10/13 04:55:04 Setting 'http.port' from environment.
2019/10/13 04:55:04 Setting 'http.host' from environment.
2019/10/13 04:55:04 Setting 'path.data' from environment.
2019/10/13 04:55:04 Setting 'queue.checkpoint.writes' from environment.
2019/10/13 04:55:04 Setting 'pipeline.batch.size' from environment.
2019/10/13 04:55:04 Setting 'queue.type' from environment.
2019/10/13 04:55:04 Setting 'pipeline.workers' from environment.
2019/10/13 04:55:04 Setting 'config.reload.automatic' from environment.
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by com.headius.backport9.modules.Modules (file:/usr/share/logstash/logstash-core/lib/jars/jruby-complete-9.2.7.0.jar) to field java.io.FileDescriptor.fd
WARNING: Please consider reporting this to the maintainers of com.headius.backport9.modules.Modules
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Thread.exclusive is deprecated, use Thread::Mutex
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2019-10-13T04:55:16,870][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-10-13T04:55:16,880][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"7.3.0"}
[2019-10-13T04:55:18,362][INFO ][org.reflections.Reflections] Reflections took 33 ms to scan 1 urls, producing 19 keys and 39 values
[2019-10-13T04:55:22,420][WARN ][logstash.runner          ] SIGTERM received. Shutting down.

I've found the issue. It is caused by the pod liveness probe in Kubernetes: with the s3 output the pipeline takes longer to come up, the probe fails, and the kubelet sends SIGTERM and restarts the pod, as pointed out by https://stackoverflow.com/questions/56593504/receive-sigterm-on-logstash-startup-version-7-1-1/56784774#56784774
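Based on that answer, the workaround is to give Logstash more time before the liveness probe starts killing the pod. A sketch of a values override, assuming the chart exposes a livenessProbe block (the exact key names and defaults vary between chart versions, so check the chart's own values.yaml):

```yaml
# values.yaml override -- key names are assumptions based on common
# stable/* chart conventions; verify against the chart's values.yaml
livenessProbe:
  httpGet:
    path: /
    port: monitor          # the Logstash HTTP API port (9600)
  initialDelaySeconds: 300 # give the s3 pipeline time to start
  periodSeconds: 30
  failureThreshold: 6
```

With a longer initial delay (or a higher failure threshold), the probe no longer fires before the pipeline is ready, and the SIGTERM goes away.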

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.