Hi everyone,
I have an ELK environment installed. One of the client's requirements is to keep a backup of the records, so I created an S3 bucket and I'm using the 'logstash-output-s3' plugin.
At the moment the whole environment is configured and working correctly. However, I have some doubts about the files being uploaded to the S3 bucket. This is my 'logstash-output-s3' plugin configuration:
s3 {
  region => "us-east-1"
  bucket => "spelkbucket"
  codec => "line"
  time_file => 1
}
I'm writing one log per minute, and each minute a single file is uploaded with an incremental part number, even though these files are not larger than the default 'size_file' (5242880 bytes). This is the data in my S3 bucket:
Name File: ls.s3.904a7212-6750-4ec2-8442-982a48d02aa3.2017-08-08T06.47.part0.txt
Date: Aug 8, 2017 12:48:52 PM GMT+0200
Name File: ls.s3.550b7426-483f-4b37-b4a8-eacf83223d0a.2017-08-08T06.48.part1.txt
Date: Aug 8, 2017 12:50:00 PM GMT+0200
Name File: ls.s3.67ec0fbc-5c9c-456c-a22c-e39bf8842261.2017-08-08T06.49.part2.txt
Date: Aug 8, 2017 12:51:05 PM GMT+0200
Why is Logstash segmenting the files into parts? The files are not larger than 'size_file'. The documentation says the following:
part0 --> this means that if you set size_file, it will generate more parts when your file.size > size_file. When a file is full it will be pushed to the bucket and then deleted from the temporary directory. If a file is empty, it is simply deleted. Empty files will not be pushed.
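As I read that, the part number should only increase within the same time window, when a file actually exceeds size_file. For example, I would expect something like this (hypothetical file names, following the pattern of my bucket listing above):

ls.s3.<uuid>.2017-08-08T06.47.part0.txt   <-- first chunk of that minute
ls.s3.<uuid>.2017-08-08T06.47.part1.txt   <-- same minute, created only because file.size > size_file

Instead, in my bucket the part number keeps increasing across minutes even though every file is small.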
Another question, about the 'time_file' setting: is it the time to wait before uploading the temporary files to S3?
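To make the question concrete, this is a minimal sketch of what I think the output would look like if I set size_file explicitly alongside time_file (the size_file value is just my assumption, taken from the default mentioned above; I have not tested this):

s3 {
  region => "us-east-1"
  bucket => "spelkbucket"
  codec => "line"
  size_file => 5242880   # rotate when the temporary file reaches ~5 MB (assumption: the default value)
  time_file => 1         # also rotate every minute, as in my current configuration
}

Is my understanding correct that whichever of the two limits is reached first triggers the upload of the temporary file?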
Thanks a lot!
Best regards,
Javier