I really need your help with this. I am trying to set up the internal queue so that when my system goes offline for some time and I connect it back to the network, I can still see the Metricbeat data in Kibana for the period the system was offline. As far as I can tell I configured everything correctly; here is the config:
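It looks roughly like this (the path and the sizes below are placeholders for illustration, not my exact values):

```yaml
# Buffer events on disk in a spool file while the output is unreachable
queue.spool:
  file:
    path: "${path.data}/spool.dat"   # default location under the Metricbeat data path
    size: 512MiB                     # maximum size of the spool file on disk
    page_size: 16KiB
  write:
    buffer_size: 10MiB
    flush.timeout: 5s
    flush.events: 1024

output.elasticsearch:
  hosts: ["localhost:9200"]          # placeholder host
```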
But it only provides data for about 30 minutes. For example, if my system disconnects at 10 am and I connect it back to the network at 12 pm, I can only see data up until 10:30 am.
I've also tried configuring Logstash, but it's the same thing there as well. Do you have any idea how we can fix it?
However, considering your circumstances and the fact that you're disconnected for a number of hours, I would recommend putting something between Metricbeat and ES that can hold the data for longer periods of time.
Thank you very much for replying.
I have increased the file size to 4096MiB and also set max_retries to 500 and both backoff.max and backoff.init to 120s, but it looks like the output is still the same.
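For reference, this is roughly where I put those settings (the host is a placeholder):

```yaml
queue.spool:
  file:
    size: 4096MiB              # increased spool file size

output.elasticsearch:
  hosts: ["localhost:9200"]    # placeholder
  max_retries: 500
  backoff.init: 120s
  backoff.max: 120s
```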
What would you recommend putting between Metricbeat and ES to help me extend that time? I also checked the file path where the data collected while the system is offline should go, but I couldn't find a spool.dat file. I checked /var/lib/metricbeat and it's empty.
Anything that can remain up while ES is down and that you can use as a "buffer" for extended periods of time.
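For example, a local Logstash instance with a persistent queue could sit between Metricbeat and ES. A minimal sketch of the relevant logstash.yml settings (the size and path are just examples):

```yaml
# logstash.yml: keep the queue on disk instead of in memory
queue.type: persisted
queue.max_bytes: 8gb                   # example upper bound for buffered events
path.queue: /var/lib/logstash/queue    # example location for the queue files
```

Metricbeat would then ship to that Logstash (output.logstash) instead of directly to ES.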
Are you sure file spooling is enabled? If you check your metricbeat log, it should print a message warning you that it's on. Also, try commenting out all of the queue.mem config and just leaving queue.spool configured.
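Something along these lines (values are illustrative):

```yaml
# queue.mem:                 # commented out so only the spool queue is active
#   events: 4096
#   flush.min_events: 512

queue.spool:
  file:
    path: "${path.data}/spool.dat"
    size: 512MiB
```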
I didn't see any messages saying it's on. Do I have to do anything to enable spooling? I configured it and restarted the Metricbeat service; shouldn't it be enabled after that?
Just commented out queue.mem; I'll try disconnecting again.
Still the same 25 minutes. I disconnected it at 12 pm and connected it back at 1 pm, and it only restores data up until 12:25.
I noticed that even if I comment out all of the spooling configuration, it still remembers those 25 minutes. I think the queue.spool configuration is not working at all.
Thank you, Alex. I figured out what the problem was. Apparently it was something to do with permissions.
The only question I have now is: the size of my spool.dat file increased by 300MB, and after connecting back to the network it didn't empty the spool.dat file. How can we set events to be removed from the queue immediately?