Log files accumulating in the temporary_directory path while reading logs from S3 buckets

The s3 input downloads each file from S3 into the temporary_directory before processing it, and it always tries to delete the local copy once processing is done. See this thread for more info.
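As a minimal sketch of where that directory is configured (the bucket name, region, and path below are placeholders, not values from this thread):

```
input {
  s3 {
    bucket => "my-log-bucket"            # hypothetical bucket name
    region => "us-east-1"                # hypothetical region
    # Files are downloaded here before processing; the input deletes
    # each local copy after it has been processed.
    temporary_directory => "/tmp/logstash"
  }
}
```

If files linger in that path, it usually means processing was interrupted before the cleanup step ran.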

The json codec does not support timestamps in epoch-milliseconds (UNIX_MS) format. You could remove the codec, use mutate+gsub to change the field name from "@timestamp" to something else in the raw message, parse the JSON with a json filter, then use a date filter to parse that field and overwrite [@timestamp].
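A sketch of that filter chain, assuming the unparsed JSON arrives in [message] and renaming the key to a hypothetical "event_ts":

```
filter {
  # Rewrite the key in the raw JSON text before parsing, so the json
  # filter does not try to use the epoch-millis value as @timestamp
  mutate {
    gsub => [ "message", "\"@timestamp\"", "\"event_ts\"" ]
  }
  json {
    source => "message"
  }
  # Parse the epoch-milliseconds value and overwrite [@timestamp]
  date {
    match => [ "event_ts", "UNIX_MS" ]
    target => "@timestamp"
  }
}
```

The date filter's UNIX_MS pattern accepts millisecond epoch values, which is what the json codec was choking on.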