I'm using the plugin to ship HAProxy logs into BigQuery tables. The log lines are parsed with grok and the parsed events are sent to the plugin.
My problem is that the plugin doesn't seem to keep up with uploads: the temp files keep growing in number and eventually fill up the disk. Here is my config:
flush_interval_secs => 2
uploader_interval_secs => 300
deleter_interval_secs => 2
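
For completeness, the output block looks roughly like this (project, dataset, schema, key and temp paths are placeholders, not my real values; the interval settings are the ones above):

output {
  google_bigquery {
    # Placeholder project/dataset/schema values
    project_id             => "my-gcp-project"
    dataset                => "haproxy_logs"
    csv_schema             => "timestamp:TIMESTAMP,message:STRING"
    # Placeholder credentials
    service_account        => "logstash@my-gcp-project.iam.gserviceaccount.com"
    key_path               => "/etc/logstash/bq-key.p12"
    # Placeholder temp directory; this is where the part files listed below accumulate
    temp_directory         => "/var/lib/logstash/bq"
    flush_interval_secs    => 2
    uploader_interval_secs => 300
    deleter_interval_secs  => 2
  }
}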
It seems uploader_interval_secs controls when the plugin stops appending to the current part file and starts a new one (as you can see below, one file every 5 minutes), while the previous file is supposed to be uploaded. Since the number of files keeps growing, I think the upload trigger either isn't firing correctly or should be a separate configuration option.
Here is a listing of the temp directory. New part files appear faster than they are uploaded and deleted (roughly 38 MB every 5 minutes, about 450 MB per hour):
total 1.2G
-rw-r--r-- 1 root root 42M Sep 11 12:35 logstash_bq_61e4914c872e_2017-09-11T12:00.part006.log
-rw-r--r-- 1 root root 44M Sep 11 12:40 logstash_bq_61e4914c872e_2017-09-11T12:00.part007.log
-rw-r--r-- 1 root root 38M Sep 11 12:45 logstash_bq_61e4914c872e_2017-09-11T12:00.part008.log
-rw-r--r-- 1 root root 37M Sep 11 12:50 logstash_bq_61e4914c872e_2017-09-11T12:00.part009.log
-rw-r--r-- 1 root root 39M Sep 11 12:55 logstash_bq_61e4914c872e_2017-09-11T12:00.part010.log
-rw-r--r-- 1 root root 36M Sep 11 13:00 logstash_bq_61e4914c872e_2017-09-11T12:00.part011.log
-rw-r--r-- 1 root root 39M Sep 11 13:05 logstash_bq_61e4914c872e_2017-09-11T13:00.part000.log
-rw-r--r-- 1 root root 37M Sep 11 13:10 logstash_bq_61e4914c872e_2017-09-11T13:00.part001.log
-rw-r--r-- 1 root root 36M Sep 11 13:15 logstash_bq_61e4914c872e_2017-09-11T13:00.part002.log
-rw-r--r-- 1 root root 38M Sep 11 13:20 logstash_bq_61e4914c872e_2017-09-11T13:00.part003.log
-rw-r--r-- 1 root root 39M Sep 11 13:25 logstash_bq_61e4914c872e_2017-09-11T13:00.part004.log
-rw-r--r-- 1 root root 35M Sep 11 13:30 logstash_bq_61e4914c872e_2017-09-11T13:00.part005.log
-rw-r--r-- 1 root root 34M Sep 11 13:35 logstash_bq_61e4914c872e_2017-09-11T13:00.part006.log
-rw-r--r-- 1 root root 37M Sep 11 13:40 logstash_bq_61e4914c872e_2017-09-11T13:00.part007.log
-rw-r--r-- 1 root root 38M Sep 11 13:45 logstash_bq_61e4914c872e_2017-09-11T13:00.part008.log
-rw-r--r-- 1 root root 37M Sep 11 13:50 logstash_bq_61e4914c872e_2017-09-11T13:00.part009.log
-rw-r--r-- 1 root root 39M Sep 11 13:55 logstash_bq_61e4914c872e_2017-09-11T13:00.part010.log
-rw-r--r-- 1 root root 38M Sep 11 14:00 logstash_bq_61e4914c872e_2017-09-11T13:00.part011.log
-rw-r--r-- 1 root root 40M Sep 11 14:05 logstash_bq_61e4914c872e_2017-09-11T14:00.part000.log
-rw-r--r-- 1 root root 40M Sep 11 14:10 logstash_bq_61e4914c872e_2017-09-11T14:00.part001.log
-rw-r--r-- 1 root root 39M Sep 11 14:15 logstash_bq_61e4914c872e_2017-09-11T14:00.part002.log
-rw-r--r-- 1 root root 37M Sep 11 14:20 logstash_bq_61e4914c872e_2017-09-11T14:00.part003.log
-rw-r--r-- 1 root root 40M Sep 11 14:25 logstash_bq_61e4914c872e_2017-09-11T14:00.part004.log
-rw-r--r-- 1 root root 39M Sep 11 14:30 logstash_bq_61e4914c872e_2017-09-11T14:00.part005.log
-rw-r--r-- 1 root root 39M Sep 11 14:35 logstash_bq_61e4914c872e_2017-09-11T14:00.part006.log
-rw-r--r-- 1 root root 40M Sep 11 14:40 logstash_bq_61e4914c872e_2017-09-11T14:00.part007.log
-rw-r--r-- 1 root root 39M Sep 11 14:45 logstash_bq_61e4914c872e_2017-09-11T14:00.part008.log
-rw-r--r-- 1 root root 37M Sep 11 14:50 logstash_bq_61e4914c872e_2017-09-11T14:00.part009.log
-rw-r--r-- 1 root root 38M Sep 11 14:55 logstash_bq_61e4914c872e_2017-09-11T14:00.part010.log
-rw-r--r-- 1 root root 37M Sep 11 15:00 logstash_bq_61e4914c872e_2017-09-11T14:00.part011.log
-rw-r--r-- 1 root root 36M Sep 11 15:05 logstash_bq_61e4914c872e_2017-09-11T15:00.part000.log
-rw-r--r-- 1 root root 39M Sep 11 15:10 logstash_bq_61e4914c872e_2017-09-11T15:00.part001.log
-rw-r--r-- 1 root root 2.4M Sep 11 15:10 logstash_bq_61e4914c872e_2017-09-11T15:00.part002.log
I can try to put a PR together, but I'd need some guidance on the right approach to fix this.