Thanks Steffens,
I did quite a bit of testing over the last few days, and most of the 'behind the scenes' work.
I can stop Elasticsearch nodes, let the events queue up, restart Elasticsearch, and the pending events are sent over.
But in production, under heavy load, I get this error:
2019-05-10T18:28:10.029Z ERROR [publisher] spool/inbroker.go:544 Spool flush failed with: pq/writer-flush: txfile/tx-alloc-pages: file='/var/lib/statsdbeat/spool.dat' tx=0: transaction failed during commit: not enough memory to allocate 255 data page(s)
I'm using the default 4k page size, a large pre-allocated disk queue, and 4 GB of internal memory.
The error suggests an out-of-memory condition, but it happens when committing the events to the file (pq/writer-flush).
The 4k page size is the file block allocation size, so making that smaller won't help.
Is it internal memory? Do I need a larger disk queue? Or is the input rate simply too high for the output stream to keep up with?
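For context, here is roughly the shape of the spool configuration I'm tuning. The values below are illustrative placeholders, not my exact production settings, and I'm assuming the standard Beats `queue.spool` options:

```yaml
queue.spool:
  file:
    path: /var/lib/statsdbeat/spool.dat  # spool file from the error message
    size: 2GiB        # pre-allocated on-disk queue size (placeholder value)
    page_size: 4KiB   # default page size mentioned above
    prealloc: true    # pre-allocate the file up front
  write:
    buffer_size: 10MiB  # in-memory write buffer flushed to the spool file
    flush.timeout: 1s
    flush.events: 1024
```

My question is essentially which of these knobs (file size, write buffer, or flush settings) relates to the "not enough memory to allocate 255 data page(s)" failure.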