How can I increase Logstash throughput for the file input and file output plugins? I am running Logstash on Windows. My sample test produces a 1.7 GB log file via a stream writer, and I am moving these log messages via Logstash to another location on the same machine (and on the same drive). Moving the 1.7 GB of content takes Logstash more than 30 minutes. Is there any way I can improve this time? Below is my sample Logstash configuration, and the sample file used to generate logs.
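For reference, a minimal file-to-file pipeline of the kind described might look like the following (paths and settings are illustrative, not the exact configuration used):

```
input {
  file {
    path => "C:/logs/source/*.log"       # illustrative source path
    start_position => "beginning"        # read existing files from the start
    sincedb_path => "C:/logs/sincedb"    # where Logstash tracks read offsets
  }
}

output {
  file {
    path => "C:/logs/target/output.log"  # illustrative destination path
  }
}
```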
@Christian_Dahlqvist: I tried removing `flush_interval => 0`, and it still takes more than 30 minutes to sync the 1.8 GB file. I also agree that increasing the worker threads, as suggested by @warkolm, will have no impact here, since that setting only affects filter worker threads.
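To make the setting being discussed concrete: `flush_interval => 0` forces the file output to flush after every message, while a positive value lets it batch writes. A sketch of the tuned output block (path illustrative):

```
output {
  file {
    path => "C:/logs/target/output.log"  # illustrative path
    flush_interval => 2                  # flush every 2 seconds; 0 flushes after every message
  }
}
```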
We have various client machines which generate logs in txt files. We want to move all these text logs to one centralized server, where they can be searched and served to users. My test case is simply to check how fast Logstash can sync files using the file input and output plugins. The time Logstash takes to move the files is crucial as well.
How are the logs going to be searched once they have been moved to the central location? If you are planning on using Elasticsearch for this, there is generally no need to write the logs to a central location first; you can process them and index them directly into Elasticsearch.
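If that route were taken, the file output would be swapped for the elasticsearch output, along these lines (host and index name are illustrative):

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]    # illustrative Elasticsearch host
    index => "clientlogs-%{+YYYY.MM.dd}"  # one index per day, illustrative name
  }
}
```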
We are not planning to use Elasticsearch for now. In our initial plan, logs will be moved in real time by Logstash to a centralized server, and we will provide a basic search utility to query the logs, e.g. queries based on folder name.
So the scope of our initiative is very limited: we want to use Logstash to move logs from different client machines to the centralized server.