I have to parse a huge number of log files in Logstash. All of the log files are stored in one directory in .gz format. When I extract them, the size grows dramatically: a log file that is about 50 MB as .gz becomes roughly 1 GB once decompressed. Is there a more efficient way to parse these logs than unzipping all of them at once? Or can I extract the log files one by one, moving on to the next only after the previous one has been completely parsed by Logstash?
Yeah, I tried the gzip_lines codec on the file input plugin, but it throws this error message:
A plugin had an unrecoverable error. Will restart this plugin. Plugin: ["/test.txt"], start_position=>"beginning", sincedb_path=>"/dev/null", codec=>"UTF-8">, stat_interval=>1, discover_interval=>15, sincedb_write_interval=>15, delimiter=>"\n"> Error: Object: sample.gz is not a legal argument to this wrapper, cause it doesn't respond to "read". {:level=>:error}
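For reference, this is roughly what the pipeline config looked like when the error occurred. It is reconstructed from the error dump above rather than copied verbatim, and the path and output are placeholders:

```
input {
  file {
    # placeholder path; the real files are .gz archives in one directory
    path => "/path/to/logs/*.gz"
    start_position => "beginning"
    sincedb_path => "/dev/null"
    codec => gzip_lines { charset => "UTF-8" }
  }
}

output {
  # just printing events while testing
  stdout { codec => rubydebug }
}
```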