Modifying your logstash.yml to have log.level: warn or log.level: error might help. But it might also be worth reading those logs and understanding why Logstash is logging so much.
For example, in one of my instances, I had hundreds of megabytes of errors from xml filters that were failing because the xml was getting truncated upstream. Adding a simple check to see if the xml text ended in > before applying the xml filter cut the logs by 95%.
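A guard like that can be expressed with a conditional in the pipeline configuration. This is only a sketch: the field name `message` and the xml filter options are assumptions, not the exact filter from my instance.

```
filter {
  # Only run the xml filter when the event looks like complete XML,
  # i.e. the text ends with ">" (possibly followed by whitespace).
  if [message] =~ />\s*$/ {
    xml {
      source => "message"
      target => "parsed_xml"
    }
  } else {
    # Truncated payloads are tagged instead of generating parse errors.
    mutate { add_tag => [ "_truncated_xml" ] }
  }
}
```

Tagging the truncated events rather than dropping them lets you count how often the upstream truncation happens.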
@Badger, true. In my case I had written ruby filters for the 4.x version and I am using the same code on 6.2, which might be the problem. I am checking that, but in the meantime I wanted to stop the logs, so I changed the level to fatal. For the time being, nothing is being written.
If you set path.logs to /dev/null then you get an exception:
2018-06-12 16:03:48,264 main ERROR Unable to create file /dev/null/logstash-plain.log java.io.IOException: File /dev/null exists and is not a directory. Unable to create directory.
I expect you could update log4j.properties to avoid this.
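One possible approach, assuming Logstash's log4j2 configuration (the file is log4j2.properties in the Logstash config directory, not log4j.properties), is to route the root logger to Log4j 2's built-in Null appender. This is a sketch, not a tested configuration:

```
# log4j2.properties (sketch): discard all log output via the Null appender
status = error

appender.discard.type = Null
appender.discard.name = discard

rootLogger.level = fatal
rootLogger.appenderRef.discard.ref = discard
```

Silencing the logger entirely is risky, though; raising log.level is usually the safer option, since fatal startup errors would otherwise disappear too.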