SQS Input Pipeline Blocking Pipeline Management Updates and Job Termination

I have a very simple SQS-input-to-Elasticsearch-output pipeline that is blocking the Logstash job. If I just start Logstash with no messages in the SQS queue and then try to stop the job with Ctrl+C, I get the following error:

[2019-05-22T10:33:58,012][WARN ][logstash.runner          ] Received shutdown signal, but pipeline is still waiting for in-flight events
to be processed. Sending another ^C will force quit Logstash, but this may cause
data loss.
[2019-05-22T10:33:58,199][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>54, "name"=>"[sqsInputES]<sqs", "current_call"=>"uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/protocol.rb:181:in `wait_readable'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["@version", "@timestamp"], "id"=>"03f51ae761f58a883038a585d66a2a24135a3866a1eae8b64015aeacafd9eba1"}]=>[{"thread_id"=>35, "name"=>"[sqsInputES]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}]}}
[2019-05-22T10:33:58,201][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.
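From the log it sounds like my only options are to hit Ctrl+C a second time or to enable unsafe shutdown. Something like the sketch below is what I would use just to make Logstash exit, but it feels like a workaround rather than a fix:

# logstash.yml: terminate the pipeline on shutdown even if it is stalled (may drop in-flight events)
pipeline.unsafe_shutdown: true

Or the equivalent command-line flag on Windows:

bin\logstash.bat --pipeline.unsafe_shutdown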

I am using Elastic Cloud and Pipeline Management. If I update the SQS pipeline in Pipeline Management, I get the following errors:

[2019-05-22T10:22:58,276][INFO ][logstash.pipelineaction.reload] Reloading pipeline {"pipeline.id"=>:sqsInputES}
[2019-05-22T10:23:03,340][WARN ][org.logstash.execution.ShutdownWatcherExt] {"inflight_count"=>0, "stalling_threads_info"=>{"other"=>[{"thread_id"=>59, "name"=>"[sqsInputES]<sqs", "current_call"=>"uri:classloader:/META-INF/jruby.home/lib/ruby/stdlib/net/protocol.rb:181:in `wait_readable'"}], ["LogStash::Filters::Mutate", {"remove_field"=>["@version", "@timestamp"], "id"=>"03f51ae761f58a883038a585d66a2a24135a3866a1eae8b64015aeacafd9eba1"}]=>[{"thread_id"=>38, "name"=>"[sqsInputES]>worker0", "current_call"=>"[...]/logstash-core/lib/logstash/java_pipeline.rb:235:in `block in start_workers'"}]}}
[2019-05-22T10:23:03,343][ERROR][org.logstash.execution.ShutdownWatcherExt] The shutdown process appears to be stalled due to busy or blocked plugins. Check the logs for more information.

If I remove the SQS pipeline and create an almost identical one, the main difference being an http input instead of sqs, I am able to stop the job without problems and to update the pipeline through Pipeline Management.
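The http variant is essentially the conf shown further down with only the input block swapped out, along these lines (the port here is a placeholder):

input {
    http {
        port => 8080
    }
}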

Is there something wrong with the SQS input that is leaving threads open?

I am using Logstash 7.1 running on Windows 10.

This is the pipeline conf I am using:

input {
    sqs {
        access_key_id => "myKeyId"
        secret_access_key => "mySecretAccessKey"
        id_field => "[@metadata][sqsMessageId]"
        md5_field => "[@metadata][sqsMessageMd5]"
        queue => "Queue.fifo"
        region => "us-west-1"
        threads => 1
    }
}

filter {
    mutate {
        remove_field => ["@version", "@timestamp"]
    }
}

output {
    stdout {
        codec => "dots"
    }
    elasticsearch {
        hosts => ["https://xyz.us-west-1.aws.found.io:9243"]
        user => "${ES_USER}"
        password => "${ES_PASSWORD}"
        document_id => "%{id}"
        index => "myIndex"
    }
}
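The stalled thread is sitting in wait_readable, which I assume is the SQS long poll. Would lowering the poll interval with polling_frequency change anything, or is that unrelated? For example (same input as above, untested guess):

input {
    sqs {
        access_key_id => "myKeyId"
        secret_access_key => "mySecretAccessKey"
        id_field => "[@metadata][sqsMessageId]"
        md5_field => "[@metadata][sqsMessageMd5]"
        queue => "Queue.fifo"
        region => "us-west-1"
        threads => 1
        polling_frequency => 5   # default is 20 seconds, if I read the plugin docs right
    }
}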

I see the same behavior with Logstash 6.8.0.
