Read the file input plugin docs (File input plugin | Logstash Reference [7.16] | Elastic). Read mode supports gzip file processing, but I believe you then have to define a gzip codec in your input.
However, try it without the codec first and see if read mode works on its own. I haven't tried that before.
Each line from each file generates an event. Files ending in .gz are handled as gzip’ed files.
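For reference, a minimal pipeline sketch using the file input in read mode against gzipped files; the paths and the completed-files log location are placeholder assumptions, and the stdout output is only there for testing:

input {
  file {
    # read mode processes each file once, from beginning to end
    mode => "read"
    # in read mode, files ending in .gz are decompressed automatically
    path => "/var/log/incoming/*.log.gz"
    # keep the source files after processing instead of the default delete
    file_completed_action => "log"
    file_completed_log_path => "/var/log/logstash/completed.log"
  }
}

output {
  stdout { codec => rubydebug }
}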
Since the S3 input is line-oriented, if the contents of your GZIP files are not line-oriented (such as each being a JSON blob representing a single JSON object), you may need to use the multiline codec to buffer all of the lines into a single event, and then a json Filter to parse the contents into a structured object:
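Something along these lines (a sketch only; the bucket name, region, never-matching pattern, and flush interval are placeholders you would adjust):

input {
  s3 {
    bucket => "my-example-bucket"   # placeholder bucket name
    region => "us-east-1"           # placeholder region
    codec => multiline {
      # a pattern that never matches, combined with negate => true and
      # what => "previous", folds every line of the file into one event
      pattern => "^ThisPatternWillNeverMatchAnything$"
      negate => true
      what => "previous"
      # flush the buffered event once the input goes quiet
      auto_flush_interval => 2
    }
  }
}

filter {
  json {
    # parse the buffered JSON blob into structured fields
    source => "message"
  }
}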