Logstash-plain.log is 17G in size


#1

Why is logstash-plain.log 17G in size?

ls -lh /path/logstash/path_logs

-rw-r--r-- 1 root root 4.8M Jul 17 23:59 logstash-plain-2018-07-17.log
-rw-r--r-- 1 root root 17G Jul 18 22:43 logstash-plain.log

cat /path/logstash/path_logs/logstash-plain-2018-07-17.log

...
[2018-07-17T23:59:49,973][WARN ][logstash.outputs.elasticsearch] Could not index event to Elasticsearch. {:status=>400, :action=>["index", {:_id=>"641401326", :_index=>"test", :_type=>"test", :_routing=>nil}, #&lt;LogStash::Event:0x7af1884b&gt;], :response=>{"index"=>{"_index"=>"test", "_type"=>"test", "_id"=>"641401326", "status"=>400, "error"=>{"type"=>"mapper_parsing_exception", "reason"=>"failed to parse [log_date]", "caused_by"=>{"type"=>"illegal_argument_exception", "reason"=>"Invalid format: \"...\""}}}}}


(Mark Walkom) #2

Probably because you have bad data going through your pipeline that isn't being accepted by Elasticsearch, so it's filling the log with the (relatively large) JSON error response for every rejected event.
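
A minimal sketch of one way to address the root cause, assuming the problem is an unparseable `log_date` field (the field name comes from the error above; the date patterns here are hypothetical and would need to match your actual data):

```
filter {
  date {
    # Try the formats your events actually use; these patterns are assumptions.
    match => ["log_date", "ISO8601", "yyyy-MM-dd HH:mm:ss"]
    target => "log_date"
    # Tag events whose date still fails to parse, so they can be routed
    # elsewhere instead of being rejected by Elasticsearch.
    tag_on_failure => ["_log_date_parse_failure"]
  }
}
```

With a tag like that in place, an output conditional can send failed events to a file instead of Elasticsearch, which stops the 400 responses from flooding the log.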


#3

Is it normal to have a 17 GB file? At what size does it roll?


(Mark Walkom) #4

I am not sure how Logstash handles log rotation, sorry. Perhaps someone else can comment there.


#5

Logstash uses log4j, so one would have to check /etc/logstash/log4j2.properties to see what the rolling policy on the appender is. In 6.3 the default is daily rotation plus size-based rotation plus gzip of old logs, but that was only just introduced. I would not be surprised if older versions used a plain DailyRollingFileAppender, so they just had one log file per day with no size limit.
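
For illustration, the 6.3-style defaults described above look roughly like this in /etc/logstash/log4j2.properties (a sketch based on those defaults; the exact size threshold and pattern may differ in your version):

```
appender.rolling.type = RollingFile
appender.rolling.name = plain_rolling
appender.rolling.fileName = ${sys:ls.logs}/logstash-plain.log
# %d rolls daily, %i rolls on size, .gz compresses the rolled file
appender.rolling.filePattern = ${sys:ls.logs}/logstash-plain-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling.layout.type = PatternLayout
appender.rolling.layout.pattern = [%d{ISO8601}][%-5p][%-25c] %m%n
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 100MB
```

With both triggering policies present, the active logstash-plain.log rolls at 100 MB as well as at midnight, so a single file should never reach 17 GB.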


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.