Hello,
I am using Logstash with multiple outputs, with the config below:
output {
  if [channel] == "test" {
    elasticsearch {
      hosts => ["elk_url"]
      index => "logs-%{+YYYY.MM.dd}"
    }
    file {
      codec => json_lines
      path => "/tmp/logs/test.log"
      flush_interval => 0
    }
  }
}
With this config, I can see all events shipped to Elasticsearch/Kibana, whereas when I try to validate the same events in test.log, some events are missing.
My test.log is on an NFS share.
I see that the NFS share is the bottleneck here. Is there a way to control the rate at which Logstash writes output when using the file output plugin?
I have tried tuning pipeline.batch.size (the relevant settings are sketched below):
- tested with 500: still losing data on the share
- tested with 200: still losing data on the share
- tested with 125: still losing data on the share
- tested with 60: still losing data on the share
- tested with 1: still losing data on the share, and the rate at which events are sent to both Elasticsearch and the NFS share also becomes very slow
Logstash config: 16 pipeline workers, 4 GB heap
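For reference, these are roughly the relevant Logstash settings on my side (a sketch, not the exact files; pipeline.batch.delay is shown at what I believe is its default, I have not tuned it):

# logstash.yml
pipeline.workers: 16
pipeline.batch.size: 125    # also tested 500, 200, 60 and 1
pipeline.batch.delay: 50    # untouched, assumed default

# jvm.options
-Xms4g
-Xmx4g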
Filebeat config : 4096 mem.events, 2048 flush.min_events, 8 workers, 2048 bulk_max_size
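And the Filebeat side, approximately (again a sketch; the hosts value is a placeholder):

# filebeat.yml (relevant parts only)
queue.mem:
  events: 4096
  flush.min_events: 2048

output.logstash:
  hosts: ["logstash_host:5044"]   # placeholder
  worker: 8                       # the 8 workers mentioned above
  bulk_max_size: 2048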
Regards
Ashish