RESOURCE_LOCKED messages

I have three Logstash instances running a fairly old release (6.2.3). Two run in standby mode and one in active mode; the node that holds the VIP is considered the active node.
That said, all three instances are running, but only the active node processes records.
After restarting all three nodes a few times, I now see the following message on all three nodes:
[WARN ][logstash.inputs.rabbitmq ] Error while setting up connection for rabbitmq input! Will retry. {:message=>"#method<channel.close>(reply-code=405, reply-text=RESOURCE_LOCKED - cannot obtain exclusive access to locked queue '' in vhost '/', class-id=50, method-id=10)", :class=>"MarchHare::ChannelAlreadyClosed", :location=>"/opt/northstar/thirdparty/logstash/vendor/bundle/jruby/2.3.0/gems/march_hare-3.1.1-java/lib/march_hare/exceptions.rb:121:in `convert_and_reraise'"}

I have restarted the RabbitMQ server (a remote node) as well as the three Logstash instances, but I cannot resolve this issue.

Does anyone have an idea how to solve this issue?

Thanks

Anyone?

Are you using a persistent queue on disk?
Can you show pipelines.yml?

It might also be something on the Rabbit side; check link1 link2 link3

All the parameters are commented out (default settings), so there is no persistent queue on disk.
For the pipeline in question, 5 workers are configured and the batch size is set to 500.
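Since you asked, the relevant part of pipelines.yml looks roughly like this (the pipeline id and config path here are placeholders, not the real values):

- pipeline.id: main                          # placeholder id
  path.config: "/path/to/pipeline/*.conf"    # placeholder path
  pipeline.workers: 5
  pipeline.batch.size: 500
  # queue.type is left commented out, so the default in-memory queue is used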

Then I would look at the Rabbit side; check the links.
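On the Rabbit node, something along these lines should show whether the queue is still held by an exclusive consumer or an orphaned connection (vhost '/' and the column selection are just the usual defaults, adjust to your setup):

rabbitmqctl list_queues -p / name consumers exclusive_consumer_tag
rabbitmqctl list_connections name client_properties

If an old Logstash connection is still hanging on to the queue, closing that connection (or letting it time out) should release the lock.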

As @Rios already mentioned, this seems to be an issue on the RabbitMQ side; Logstash is trying to connect and the RabbitMQ server is returning the error you got.

But, can you share your logstash input configuration?

The input for rabbitmq is nothing special:

input {
  rabbitmq {
    host     => "${MQ_HOST:localhost}"
    user     => "${MQ_USERNAME:northstar}"
    password => "${MQ_PASSWORD}"
    port     => "${MQ_PORT:5672}"
    exchange => <exchange-name>
    queue    => <queue-name>
    key      => <key-name>
    codec    => thrift {
      classname        => <class-name>
      file             => <file-name>
      protocol_factory => "CompactProtocolFactory"
    }
  }
  ...
}

I agree that the issue is probably on the rabbitmq side.

From the links above:

  1. Not mirroring the st2.trigger.watch* queue fixed the issue:

rabbitmqctl set_policy ha-two '^(?!st2\.trigger\.watch\.).*' '{"ha-mode":"exactly","ha-params":2,"ha-sync-mode":"automatic"}'

  2. High concurrency brought on "cannot obtain exclusive access to locked queue" (Exclusive queues can be orphaned · Issue #1323 · rabbitmq/rabbitmq-server · GitHub).

  3. Don't use an exclusive queue, just use auto-delete. Exclusive queues are just that - exclusive to the connection that declares them.
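On the Logstash side, the rough equivalent of point 3 would be to make sure the rabbitmq input does not declare the queue as exclusive, something along these lines (option names are from the logstash-input-rabbitmq plugin; whether auto_delete makes sense depends on your setup):

input {
  rabbitmq {
    host        => "${MQ_HOST:localhost}"
    queue       => <queue-name>
    exchange    => <exchange-name>
    key         => <key-name>
    exclusive   => false   # do not request exclusive access to the queue
    auto_delete => true    # assumption: let RabbitMQ drop the queue when the last consumer disconnects
    durable     => false
  }
}

With exclusive => false, another node should be able to pick up the queue after a failover instead of hitting RESOURCE_LOCKED.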
