I have a problem where I receive logs together with an ID, and I have two Elasticsearch indices (archive_index and latest_logs). Each time a log arrives, I check whether its ID is the same as the previous ID: if it is, I send the log to latest_logs; if it is not, everything currently in latest_logs gets archived into archive_index.
I assumed this archiving cannot be done with Logstash alone, because it requires first checking the ID, then copying all the records from latest_logs into archive_index, and then deleting all the records from latest_logs so that it only contains the logs for the new ID.
So currently I am doing it like this:
logstash -> file -> java -> es
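Conceptually, the archiving step in my Java code does something like the following. This is only a simplified sketch, assuming the Elasticsearch high-level REST client; the index names latest_logs and archive_index are my real indices, but the class and method names (LogArchiver, handle, indexIntoLatest) are just illustrative:

```java
import java.io.IOException;

import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.index.reindex.DeleteByQueryRequest;
import org.elasticsearch.index.reindex.ReindexRequest;

public class LogArchiver {
    private final RestHighLevelClient client;
    private String previousId = null;

    public LogArchiver(RestHighLevelClient client) {
        this.client = client;
    }

    // Called for every incoming log; archives latest_logs when the ID changes.
    public void handle(String currentId, String logJson) throws IOException {
        if (previousId != null && !previousId.equals(currentId)) {
            archiveLatestLogs();
        }
        previousId = currentId;
        indexIntoLatest(logJson);
    }

    private void archiveLatestLogs() throws IOException {
        // Copy everything from latest_logs into archive_index ...
        ReindexRequest reindex = new ReindexRequest()
                .setSourceIndices("latest_logs")
                .setDestIndex("archive_index");
        client.reindex(reindex, RequestOptions.DEFAULT);

        // ... then empty latest_logs so it only holds the new ID's logs.
        DeleteByQueryRequest delete = new DeleteByQueryRequest("latest_logs")
                .setQuery(QueryBuilders.matchAllQuery());
        client.deleteByQuery(delete, RequestOptions.DEFAULT);
    }

    private void indexIntoLatest(String logJson) {
        // Plain IndexRequest into latest_logs, omitted for brevity.
    }
}
```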
But the problem is that when I write all the logs into a file, the file grows very rapidly (several GB in a few days).
So I want an alternative solution.
- Is it possible to feed my logs from Logstash into Kafka, and then from Kafka into Java? (If yes, how will Kafka ensure that older data is flushed once the Java application has already consumed it?) A sketch of how I imagine the Java side would look is below.
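From what I understand, Logstash ships a kafka output plugin, so the logstash -> Kafka leg should be possible. On the Kafka side, data is not deleted when a consumer reads it; the broker keeps messages until the topic's retention settings (retention.ms / retention.bytes) expire them, and each consumer group tracks its own committed offset so the Java application never re-reads data it has already processed. A minimal sketch of the consumer side, assuming the standard Apache Kafka Java client (the topic name "logs", group id "log-archiver", and broker address are placeholders I made up):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LogConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // The consumer group is what Kafka uses to remember our position (offset).
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "log-archiver");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Commit offsets manually, only after the logs are safely handled.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Topic written to by the Logstash kafka output.
            consumer.subscribe(Collections.singletonList("logs"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Same ID-check / archive logic as before.
                    process(record.value());
                }
                // Committing marks these records as consumed for this group;
                // Kafka itself only deletes data when topic retention expires.
                consumer.commitSync();
            }
        }
    }

    private static void process(String logJson) {
        // Index into latest_logs / archive_index as before (omitted).
    }
}
```

With this setup there is no intermediate file at all, so the disk growth problem would move to Kafka, where retention is bounded by configuration rather than growing indefinitely.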