Regarding Logstash Persistent Queues

(Archelle Pagapulan) #1

Hello all,

This issue has been bugging me for a week now.

We have a cluster of 4 ES nodes: 3 are master/data-eligible and 1 is a coordinating node.
We also have Logstash running on the same servers as the master/data-eligible nodes,
and Kibana running on the coordinating node.

Our issue is this: we implemented persistent queues in Logstash, and they worked well at first. But as the log volume continued to grow, Logstash started failing in the sense that it no longer ingested data into Elasticsearch. After checking the queue directory, we found many checkpoint files in it. We don't have access to our Filebeat servers, but the team handling them says they are working fine. I believe that's correct, because our Metricbeat reports work well. I think there is a bottleneck in our Logstash setup.
Could somebody explain what these files are? Also, the page file keeps growing.


We have this in our logstash.yml to enable persistent queues:

queue.type: persisted
queue.max_events: 1000
queue.max_bytes: 8gb
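As a rough sanity check on the two limits above, it can help to estimate which one binds first. This is only a sketch with an assumed average event size of 1 KiB (not taken from our actual data):

```python
# Estimate which persistent-queue limit is reached first.
# The average event size below is a hypothetical assumption; adjust for real data.
max_events = 1000                 # queue.max_events
max_bytes = 8 * 1024**3           # queue.max_bytes: 8gb
avg_event_size = 1024             # assumed 1 KiB per event (hypothetical)

bytes_at_event_cap = max_events * avg_event_size
binding_limit = "max_events" if bytes_at_event_cap < max_bytes else "max_bytes"
print(binding_limit, bytes_at_event_cap)
```

With these numbers the event cap is hit while the queue holds only about 1 MiB, far below the 8 GB byte limit, so under this assumption queue.max_events is the effective ceiling.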

Our Logstash output:

              elasticsearch {
                hosts => ["host1:9200", "host2:9200", "host3:9200"]
                index => "test-%{+YYYY.MM.dd}"
              }

(system) #2

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.