How can I accelerate the queue output speed in Logstash?

We encountered a network failure yesterday, and before that we had enabled the persistent queue. After the network recovered, we can see that the queue is full, and because of this I cannot see real-time data in Kibana:

-rw-r--r-- 1 logstash logstash 262144000 Jan 3 09:36 page.711
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 09:46 page.712
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 09:55 page.713
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 10:06 page.714
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 10:15 page.715
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 10:25 page.716
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 10:36 page.717
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 10:45 page.718
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 10:54 page.719
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:03 page.720
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:12 page.721
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:23 page.722
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:33 page.723
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:41 page.724
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:44 page.725
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 11:45 page.726
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 12:11 page.727
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 12:21 page.728
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 12:29 page.729
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 12:40 page.730
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 12:50 page.731
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 12:59 page.732
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 13:07 page.733
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 13:18 page.734
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 13:32 page.735
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 13:43 page.736
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 13:53 page.737
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 14:04 page.738
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 14:14 page.739
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 14:25 page.740
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 14:34 page.741
-rw-r--r-- 1 logstash logstash 262144000 Jan 3 14:39 page.742

We can see that the previous data is still in the queue. Each page file above is 262144000 bytes (250 MB), so the 32 pages shown hold roughly 8 GB of backlog. Is there any way to speed up the processing of the queue so that all the data is output to Elasticsearch as soon as possible?
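For reference, the drain behaviour of a persistent queue is partly governed by its checkpoint settings in logstash.yml. Below is a minimal sketch with the documented defaults (illustrative values only, assuming a Logstash release where the default page size is 250 MB, which matches the listing above):

queue.type: persisted
queue.page_capacity: 250mb     # size of each on-disk page file
queue.checkpoint.acks: 1024    # force a checkpoint after this many ACKed events (0 = unlimited)
queue.checkpoint.writes: 1024  # force a checkpoint after this many written events (0 = unlimited)

Raising the checkpoint thresholds means fewer fsyncs and faster queue I/O, at the cost of having more in-flight events at risk if Logstash crashes.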

What does your Elasticsearch cluster look like? Is it saturated?

The current cluster has the following:

  • 3 master nodes, 2 data nodes, and 2 coordinating nodes

What kind of saturation are you referring to? CPU? Memory?

It could be CPU, disk I/O or even memory, resulting in excessive GC. Do you have monitoring installed?

Yes, in Kibana all the metrics look normal. The question now is: there is a mass of data in my Logstash persistent queue, and at the same time Logstash keeps receiving new data, so the queue only drains slowly. Is there any way to solve this?

I would recommend identifying what is limiting throughput. Logstash can only process data as fast as its slowest output, so if Logstash is not limited by CPU (this is often what limits Logstash) and does not have an unusually low number of worker threads, it is likely that the destination(s) are the limiting factor.
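If Logstash does have CPU headroom, the usual logstash.yml knobs for raising throughput are the worker count and the batch size. A minimal sketch (the values are illustrative, not recommendations):

pipeline.workers: 4        # defaults to the number of CPU cores on the host
pipeline.batch.size: 250   # events each worker collects per batch (default 125); larger batches mean larger bulk requests to Elasticsearch
pipeline.batch.delay: 50   # milliseconds to wait before flushing an undersized batch

Note that larger batches also increase heap usage, so the JVM heap may need to grow with them.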

It may help if you show your Logstash config so we can see if there are any potential issues/inefficiencies there.

Hi Christian,

I am Antony's colleague, and below is the information about our Logstash environment.

Logstash is running on a virtual machine with 4 vCPUs and 8 GB of RAM.

logstash.yml
node.name: LS
path.data: /var/lib/logstash
pipeline.workers: 4
pipeline.output.workers: 4
path.config: /etc/logstash/conf.d
config.reload.automatic: true
config.reload.interval: 10
modules:
  - name: netflow
    var.input.udp.port: "2055"
    var.elasticsearch.hosts: "10.10.10.11:9200,10.10.10.12:9200"
    var.kibana.host: "10.10.10.10:5601"

queue.type: persisted
queue.max_bytes: 8192mb
dead_letter_queue.enable: true
http.host: "127.0.0.1"
http.port: 9600-9700
log.level: info
path.logs: /var/log/logstash

jvm.options
-Xms4g
-Xmx4g

Thank you in advance.

What does CPU usage on the Logstash VM look like while it is catching up? What do Elasticsearch CPU and disk I/O look like at the same time?
