S3 output plugin does NOT write data into S3 fast enough

I'm using the Logstash file input plugin to detect new lines in my input files and the S3 output plugin to have Logstash write that data to S3.

I'm invoking Logstash like this:

/usr/share/logstash/bin/logstash -f /home/ubuntu/foo.conf --path.settings=/etc/logstash

Here is what foo.conf looks like:

input { 
  file { 
    path => [ "/var/tmp/foo/*.txt" ]
    sincedb_path => "/var/tmp/foo/.sincedb"
    ignore_older => 5    # seconds; files last modified more than 5s ago are ignored
    close_older => 20    # seconds; a file is closed after 20s without activity
    id => "file_input_foobar"
  } 
}

output {
  s3 {
    access_key_id => "xxx"
    secret_access_key => "xxx/xx"
    region => "us-east-2"
    bucket => "myBucket"
    rotation_strategy => "size"
    id => "output_s3_foobar"
    time_file => 1
    upload_queue_size => 1
  }
}

If I append new lines to my input file(s) and then run the command above, Logstash immediately writes those lines to S3 as new objects on startup. But after that, if I manually append more lines to my input file(s), Logstash doesn't do anything.

Then, when I press Ctrl-C to quit Logstash, it writes the lines I added since startup to S3.
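For reference, I'm relying on the file input's default polling, which as I understand it is equivalent to setting stat_interval explicitly (a sketch based on my reading of the plugin docs, not something in my actual config):

input {
  file {
    path => [ "/var/tmp/foo/*.txt" ]
    # stat_interval is how often the plugin checks watched files for new content;
    # the documented default is 1 second, so I don't think detection latency is the problem
    stat_interval => 1
  }
}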

How can I make Logstash flush new lines to S3 more frequently?
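My understanding from the plugin docs is that with rotation_strategy => "size", a file is only uploaded once it reaches size_file (which I haven't set, so the 5 MB default would apply), and time_file is ignored. A variant I'm considering to force more frequent uploads (just a sketch; the size_file value is an arbitrarily small example for testing):

output {
  s3 {
    # ... same credentials, region and bucket as above ...
    rotation_strategy => "size_and_time"  # rotate on whichever limit is hit first
    time_file => 1                        # minutes, per the docs
    size_file => 1024                     # bytes; tiny value only to test rotation
  }
}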

Here is what my logstash.yml looks like:

node.name: test
path.data: /var/lib/logstash
pipeline.workers: 2
pipeline.output.workers: 1
pipeline.batch.size: 1
pipeline.batch.delay: 5
path.config: /etc/logstash/conf.d

config.reload.automatic: true
config.reload.interval: 3
queue.type: persisted
queue.page_capacity: 100mb
queue.max_events: 1
queue.checkpoint.acks: 2

queue.checkpoint.writes: 2
log.level: debug
path.logs: /var/log/logstash
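
In case the units matter, here's my reading of the batching settings I've changed (the comments are my understanding of the docs, not verified behavior):

pipeline.batch.size: 1    # hand a batch to the filter/output stages after every single event
pipeline.batch.delay: 5   # milliseconds to wait for more events before flushing an undersized batch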
