Pipeline with Persistent Queues stops working after some hours


#1

Hello,

I have a pipeline with the following plugins:
input: Kafka, Heartbeat
filters: Metric, Sleep, Clone, Mutate
outputs: File, Kafka, JMS (WebSphere MQ queue)
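For reference, a minimal sketch of a pipeline shaped like the one above (all option values here are placeholder assumptions, not the original config; JMS connection details are omitted):

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # assumption: broker address
    topics => ["events"]                    # assumption: topic name
  }
  heartbeat {
    interval => 10                          # assumption: heartbeat every 10 s
  }
}

filter {
  metrics { meter => "events" }             # event-rate metric
  sleep   { time => "1" every => 1 }        # throttle (as described below)
  clone   { clones => ["xxx", "yyy"] }
  mutate  { rename => { "old" => "new" } }  # assumption: example rename
}

output {
  file  { path => "/tmp/events.log" }       # assumption: local file path
  kafka { topic_id => "out" }               # assumption: output topic
  jms   { }                                 # WebSphere MQ settings omitted
}
```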

After working fine for some hours, Logstash stops processing events. The Heartbeat stops too. If I enable DEBUG logging I still see "Pushing flush onto pipeline", but no events are processed. If I restart, it works fine again for some hours until it stops again.

I tried changing settings in the jvm.options and logstash.yml files, without success.

Has anyone seen this behaviour before? What's really strange to me is that Heartbeat stops printing its output too.

Thank you in advance for the support.


(Christian Dahlqvist) #2

What does your config look like? How come you are using the sleep filter?


#3

My pipeline has Kafka 0.9 as input (plugin version 4.2.0); in the filter section I use Sleep, Clone (2 clones), and Mutate (renaming and adding fields).
The outputs are Kafka (4.0.4), JMS (3.0.1) writing to an IBM MQ queue, and a local file. I use the Heartbeat plugin and the Metric plugin for monitoring my pipeline.

The Sleep part is:

```
sleep {
  time => "1"
  every => 1
}
```

then immediately after I clone:

```
clone {
  clones => [ "xxx", "yyy" ]
}
```

I found out that I can reproduce the problem by setting the parameter queue.max_bytes to a very small value (10 MB); this way Logstash stalls almost immediately. If I set this parameter to 3 GB, it stalls after around 1 day.
What am I missing here?
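For context, the behaviour described above is governed by the persistent-queue settings in logstash.yml; a minimal sketch of the relevant block (the values shown are examples for reproducing the stall, not a recommendation, and the queue path is an assumption):

```
queue.type: persisted
queue.max_bytes: 10mb                  # small cap reproduces the stall quickly
path.queue: /var/lib/logstash/queue    # assumption: typical queue directory
```

When the queue reaches queue.max_bytes, Logstash applies back-pressure and blocks the inputs until the outputs drain the queue, which would explain the Heartbeat input going silent at the same time: if an output (e.g. the JMS one) stops acknowledging events, the queue fills and every input, including heartbeat, is blocked.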


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.