Translog exploding

Translog (.tlog) files are becoming very big and the server runs out of disk space quite quickly.

Stored data is loaded exclusively via Logstash, which pulls it from MySQL.

Considering my problem is disk space, my related questions are:

  1. Are there any best practices, in terms of Logstash config (last_run etc.), for working with data that is pulled exclusively from a MySQL server every 30 minutes, with many records inserted into hundreds of indices? (A sketch of the kind of configuration I mean follows this list.)

  2. How can I avoid having such big .tlog files? Should I set translog configuration parameters somewhere in the logstash.conf file?
    If it's a matter of flushing, should I flush every once in a while, or only once the translog reaches some size threshold?
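
For context, here is a minimal sketch of the jdbc input configuration I mean; the connection details, table name, and tracking column are placeholders for my actual setup:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"  # placeholder
    jdbc_user => "user"                                           # placeholder
    jdbc_password => "secret"                                     # placeholder
    jdbc_driver_library => "/path/to/mysql-connector-j.jar"       # placeholder
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    schedule => "*/30 * * * *"        # pull every 30 minutes
    # only fetch rows changed since the last run
    statement => "SELECT * FROM events WHERE updated_at > :sql_last_value"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
    # where the last_run value is persisted between runs
    last_run_metadata_path => "/var/lib/logstash/.events_last_run"
  }
}
```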

Thanks.

I'm not sure what is being asked here. Logstash doesn't have a translog; where are these files with the .tlog extension coming from?

> many records inserted into hundreds of indices?

You mean Elasticsearch indices? If so, as long as each event carries enough data to determine which index it belongs to, the index setting of the elasticsearch output can be used to pick the destination index per event (see the sketch below).
That said, I'm not sure what your current architecture is, but hearing "hundreds of indices" makes me wonder if the data could be modelled in a different way.
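
A minimal sketch of that per-event routing; the hosts value and the [@metadata][target_index] field are assumptions, with the field presumably set earlier in a filter:

```
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]       # placeholder
    # sprintf reference: each event is routed to the index named
    # by its own [@metadata][target_index] field
    index => "%{[@metadata][target_index]}"
  }
}
```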

> Should I set translog configuration parameters somewhere in the logstash.conf file?

Again, Logstash doesn't create .tlog files; I'm not sure where these are coming from.
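
For what it's worth, Elasticsearch itself does keep a per-shard translog on disk (its files use a .tlog extension), so if that's what you're seeing, it's governed by index settings rather than anything in logstash.conf. The translog is flushed automatically once it reaches a size threshold, which you can lower; a sketch, with the index name as a placeholder:

```
PUT my-index/_settings
{
  "index.translog.flush_threshold_size": "256mb"
}
```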