How to increase the data sent out by logstash forwarder


(Arun John V) #1

I currently see the following from the logstash-forwarder logs :

2015/10/19 04:42:01.697852 --- options -------
2015/10/19 04:42:01.697913 config-arg: /etc/logstash-forwarder.conf
2015/10/19 04:42:01.697925 idle-timeout: 5s
2015/10/19 04:42:01.697930 spool-size: 1024
2015/10/19 04:42:01.697934 harvester-buff-size: 16384
2015/10/19 04:42:01.697937 --- flags ---------
2015/10/19 04:42:01.697941 tail (on-rotation): false
2015/10/19 04:42:01.697945 log-to-syslog: false
2015/10/19 04:42:01.697949 quiet: false
2015/10/19 04:42:01.698390 {
"network": {
"servers": [ "10.0.0.7:5000" ],
"timeout": 15,
"ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
},
"files": [
{
"paths": [ "/var/log/squid/*" ],
"fields": { "type": "squid" }
}
]
}

2015/10/19 06:58:34.397528 Registrar: processing 1024 events
2015/10/19 06:58:34.870352 Registrar: processing 1024 events
2015/10/19 06:58:35.323514 Registrar: processing 1024 events
2015/10/19 06:58:35.802681 Registrar: processing 1024 events
2015/10/19 06:58:36.345547 Registrar: processing 1024 events

My questions are as follows:

  1. How can I increase the 1024 value?
  2. What are the implications or consequences if we make such a change?
  3. What is the limit or maximum value that can be configured?

My current ELK stack is configured as follows:

Logstash-forwarder > Logstash > Elasticsearch < Kibana.

Also, do I have to make any changes to the Logstash receiver to accommodate the incoming volume of data?


(Magnus Bäck) #2
  1. How can I increase the 1024 value?

Use the -spool-size command-line option (it corresponds to the "spool-size: 1024" entry in your options dump).
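For example, something like this (a sketch — the config path matches your log output; 4096 is an arbitrary illustrative value, not a recommendation):

```shell
# Raise the spool size from the default 1024 to 4096 events per flush.
logstash-forwarder -config /etc/logstash-forwarder.conf -spool-size 4096
```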

What are you trying to accomplish? You're only flushing about twice a second with the default settings so I doubt the overhead of flushing is killing your performance.
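As for changes on the Logstash side: at this volume probably none are needed, but for reference, a minimal lumberjack input for a 1.x-era Logstash looks roughly like this — the port and CA certificate match your forwarder config above, while the key path is an assumption:

```
input {
  lumberjack {
    # Must match the port and certificate the forwarder connects with.
    port => 5000
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"  # assumed path
  }
}
```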
