At first I thought the problem was in the pipe configuration, but now it looks like a problem with the workers. My Logstash has 4 workers: on startup it assigns one worker to files_log and two workers to access_log. Somehow (I thought workers always stay pinned to their pipelines), when a new file appears in files_log, Logstash can't assign a worker to the pipeline, and I must restart the Logstash process to get the workers re-assigned.
Where did I make a mistake in the config? I can't find the error on my own :\
If you set that in logstash.yml, then any pipeline that does not have a pipeline-specific setting for pipeline.workers will have 4 worker threads. Since both of your pipelines do have a pipeline-specific setting, it really has no effect.
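For illustration, here is a minimal sketch of how the per-pipeline override works. The pipeline IDs come from the thread, but the worker counts and config paths are assumptions, not quoted from the actual setup:

    # logstash.yml -- global default, used only by pipelines
    # that do not set pipeline.workers themselves
    pipeline.workers: 4

    # pipelines.yml -- pipeline-specific settings take
    # precedence over the global default above
    - pipeline.id: files_log
      path.config: "/etc/logstash/conf.d/files_log.conf"   # assumed path
      pipeline.workers: 1
    - pipeline.id: access_log
      path.config: "/etc/logstash/conf.d/access_log.conf"  # assumed path
      pipeline.workers: 2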
Hello,
Thanks for the reply.
Here is my configuration (the input section in detail; the filter and output sections are probably fine, since I didn't notice any problems with filtering or with saving logs to ES):
Hello Badger,
Thanks for your answer. I deleted exit_after_read => "true"
and it looks better... but it is not working fully right yet.
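For context, a minimal sketch of what the file input presumably looks like after that change; the path and the completed-file housekeeping options are assumptions, since the original config is not quoted here:

    input {
      file {
        # assumed path; the original post does not quote it
        path => "/var/log/files_log/*.log"
        # read mode consumes each file once instead of tailing it
        mode => "read"
        # with exit_after_read removed, the input keeps watching the
        # directory for new files instead of shutting down after the
        # initial batch
        file_completed_action => "log"
        file_completed_log_path => "/var/log/logstash/completed.log"
      }
    }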
Right now, after a restart, Logstash processed the files that were already in the folder and then waited for new ones.
After a new file transfer, Logstash processed three of the six files (leaving three unparsed). After the next transfer (5 files) it again took only three of them. So it is better, but not ideal.
I don't get why it is not deterministic...
Regards,
Karl Wolf