How to efficiently parse log files in Logstash?

Hi,

I have to parse a huge number of log files in Logstash. All the files are stored in one directory in .gz format, and they expand dramatically when decompressed: a log file that is 50 MB as .gz becomes roughly 1 GB of plain text. Is there a way to parse these logs efficiently instead of unzipping all of them at once? For example, can I extract the files one by one, moving on to the next only after a file has been completely parsed by Logstash?

Thanks.

Have a look at the gzip_lines codec.
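That is, something along these lines (a minimal sketch; the wildcard path is hypothetical):

input {
  file {
    # hypothetical location of the compressed logs
    path => "/var/log/app/*.gz"
    codec => "gzip_lines"
  }
}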

Yes, I tried the gzip_lines codec, but it throws this error message:

A plugin had an unrecoverable error. Will restart this plugin. Plugin: <LogStash::Inputs::File path=>["/test.txt"], start_position=>"beginning", sincedb_path=>"/dev/null", codec=><LogStash::Codecs::GzipLines charset=>"UTF-8">, stat_interval=>1, discover_interval=>15, sincedb_write_interval=>15, delimiter=>"\n">
Error: Object: sample.gz is not a legal argument to this wrapper, cause it doesn't respond to "read". {:level=>:error}

My Logstash file input configuration:

file {
  # /test.txt contains a list of gzipped files
  path => "/test.txt"
  start_position => "beginning"
  sincedb_path => "/dev/null"
  codec => "gzip_lines"
}

I also tried path => "/sample.gz" directly, but the error was the same.
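The error message suggests the file input hands the codec individual lines (plain strings), while gzip_lines expects an object that responds to read, i.e. an open file handle. One way to sidestep this (a sketch, assuming a shell is available; the paths and pipeline name are hypothetical) is to decompress outside Logstash and stream the plain text through a stdin input, so the files are processed one after another and the decompressed data never has to sit on disk:

# Feed decompressed text in from the shell, one file after another:
#
#   zcat /logs/*.gz | bin/logstash -f pipeline.conf
#
input {
  stdin { }
}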

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.