Probably because you have bad data going through your pipeline that isn't being accepted by Elasticsearch, so Logstash is filling the log with the (relatively large) JSON response for each rejected event.
Logstash uses log4j, so one would have to check /etc/logstash/log4j2.properties to see what the rolling policy on the appender is. In 6.3 the default is daily rotation plus size-based rotation plus gzip of old logs, but that was only recently introduced. I would not be surprised if older versions used a plain DailyRollingFileAppender, so they just kept one log per day with no size cap.
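As a sketch of what to look for (not copied from any particular Logstash release, and the file paths and appender names may differ on your install), a log4j2 rolling appender that combines daily rotation, a size cap, and gzip compression of rolled files looks roughly like this:

```properties
# Hypothetical log4j2.properties fragment; adjust names/paths to match
# what is actually in /etc/logstash/log4j2.properties on your system.
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = /var/log/logstash/logstash-plain.log
# %d gives daily rotation; %i plus the .gz suffix gives numbered,
# gzip-compressed rollovers within a day.
appender.rolling.filePattern = /var/log/logstash/logstash-plain-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
# Without a DefaultRolloverStrategy with a max, old archives can still
# accumulate; add one if disk usage is the concern.
```

If your file only has a `TimeBasedTriggeringPolicy` (or an old-style `DailyRollingFileAppender`), there is no size limit at all, which matches the symptom of a single huge daily log.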