Custom Beat - spool disk queue

I have a simple community beat that sends messages received on a UDP channel to Elasticsearch.

    bt.client, err = b.Publisher.ConnectWith(beat.ClientConfig{
        PublishMode: beat.GuaranteedSend, // retry until the output ACKs the events
        WaitClose:   10 * time.Second,    // on Close, wait up to 10s for pending ACKs
    })
    ....
    bt.client.PublishAll(bt.buffer)
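
For context, this sits in the beater's Run method. A simplified sketch (the type and field names follow the snippet above and the beat's name; the UDP listener that fills bt.buffer, and any locking around it, are left out):

    package beater

    import (
        "time"

        "github.com/elastic/beats/libbeat/beat"
    )

    // Statsdbeat is the beater. Only the fields used in the snippet above
    // are real; the rest of this sketch is simplified.
    type Statsdbeat struct {
        done   chan struct{}
        period time.Duration
        buffer []beat.Event // filled by the UDP listener (not shown)
        client beat.Client
    }

    func (bt *Statsdbeat) Run(b *beat.Beat) error {
        var err error
        bt.client, err = b.Publisher.ConnectWith(beat.ClientConfig{
            PublishMode: beat.GuaranteedSend,
            WaitClose:   10 * time.Second,
        })
        if err != nil {
            return err
        }

        ticker := time.NewTicker(bt.period)
        defer ticker.Stop()
        for {
            select {
            case <-bt.done: // closed by Stop()
                return nil
            case <-ticker.C:
                if len(bt.buffer) == 0 {
                    continue
                }
                bt.client.PublishAll(bt.buffer) // hand the batch to the publisher pipeline
                bt.buffer = nil
            }
        }
    }

    func (bt *Statsdbeat) Stop() {
        bt.client.Close()
        close(bt.done)
    }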

The data is being sent to Elasticsearch; all OK.
But when I configure the spool queue:

    queue:
      spool:
        file:
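
For reference, the fuller set of spool settings looks roughly like this (values taken from the Beats documentation example for the beta spool queue, not my exact production config):

    queue:
      spool:
        file:
          path: "${path.data}/spool.dat"  # on-disk queue file
          size: 512MiB                    # maximum size of the spool file
          page_size: 16KiB                # allocation unit inside the file
        write:
          buffer_size: 10MiB              # in-memory write buffer
          flush.timeout: 5s               # flush at least every 5s ...
          flush.events: 1024              # ... or once 1024 events are buffered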

I see the data being flushed to disk, and it still makes it to Elasticsearch.

But how do I send an acknowledgement (and to where) so that the queue spool file can remove that record?

It seems that, over time, I'm running out of disk space in the ring.

Or do I not have to code anything for this, and it's all done behind the scenes?

The spool file has a max size; once this size is reached, it won't allocate more space. Internally it's a simple dynamic, transactional on-disk queue. It writes events into pages, potentially with multiple events in a page, or events spanning multiple pages. The page size is 4KB by default.

Once all events in a page have been ACKed, the page (or pages, in case of events spanning multiple pages) is returned to the free list and has a high chance of being reused immediately.

Although the actual on-disk usage is low, the file can still grow, based on allocation patterns.
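
To picture why, here is a toy model of the page free list (just an illustration of the idea above, not the actual txfile implementation):

    package main

    import "fmt"

    // Toy model of the spool file's page accounting: freed (ACKed) pages go
    // onto a free list and are reused; the file only grows when a write needs
    // more pages than the free list currently holds.
    type pageFile struct {
        totalPages int   // pages allocated in the file so far
        freeList   []int // page IDs returned after their events were ACKed
        nextID     int
    }

    // alloc hands out n pages, preferring the free list and growing the file
    // only when the free list is exhausted.
    func (f *pageFile) alloc(n int) []int {
        pages := make([]int, 0, n)
        for len(pages) < n && len(f.freeList) > 0 {
            last := len(f.freeList) - 1
            pages = append(pages, f.freeList[last]) // reuse an ACKed page
            f.freeList = f.freeList[:last]
        }
        for len(pages) < n {
            pages = append(pages, f.nextID) // no free page left: grow the file
            f.nextID++
            f.totalPages++
        }
        return pages
    }

    // ack returns pages to the free list once all their events are ACKed.
    func (f *pageFile) ack(pages []int) { f.freeList = append(f.freeList, pages...) }

    func main() {
        f := &pageFile{}
        b1 := f.alloc(100) // first batch: file grows to 100 pages
        f.ack(b1)          // everything ACKed: 100 pages reusable
        f.alloc(80)        // reuses free pages, file stays at 100 pages
        f.alloc(150)       // burst before any ACKs: file grows to 230 pages
        fmt.Println("pages in file:", f.totalPages, "free:", len(f.freeList))
    }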

Or do I not have to code anything for this, and it's all done behind the scenes?

All magic is behind the scenes.
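
If you want to observe the acknowledgements anyway (e.g. for logging or metrics), you can register an ACK callback when connecting. A minimal sketch, assuming the libbeat 7.x ClientConfig of that era; the helper name and the "statsdbeat" logger selector are just illustrative:

    package beater

    import (
        "time"

        "github.com/elastic/beats/libbeat/beat"
        "github.com/elastic/beats/libbeat/logp"
    )

    // connectWithACKLogging is the same ConnectWith call as above, plus an
    // ACKCount callback reporting how many events the pipeline has ACKed
    // end-to-end. The callback runs on a goroutine owned by the publisher
    // pipeline, so keep it cheap and non-blocking.
    func connectWithACKLogging(b *beat.Beat) (beat.Client, error) {
        return b.Publisher.ConnectWith(beat.ClientConfig{
            PublishMode: beat.GuaranteedSend,
            WaitClose:   10 * time.Second,
            ACKCount: func(n int) {
                logp.NewLogger("statsdbeat").Debugf("%d events acknowledged", n)
            },
        })
    }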

Thanks Steffens,

I did quite some testing over the last few days, and most of the 'behind the scenes' handling works: I can stop Elasticsearch nodes, let the events queue up, restart Elasticsearch, and the pending events are sent over.

But in production, under heavy load, I get this error:

2019-05-10T18:28:10.029Z ERROR [publisher] spool/inbroker.go:544 Spool flush failed with: pq/writer-flush: txfile/tx-alloc-pages: file='/var/lib/statsdbeat/spool.dat' tx=0: transaction failed during commit: not enough memory to allocate 255 data page(s)

I'm using the default 4K page size, a large pre-allocated disk queue, and 4 GB of memory.

The error suggests an out-of-memory problem, but it happens when the spool tries to commit the events to the file (pq/writer-flush).

The 4K page size is the file's block allocation size, so making it smaller won't help.
Is it internal memory? Do I need a larger disk queue size? Or is the input stream simply too fast for the output to keep up with?
