Can we delay output to Elasticsearch in batch?


I have an Elastic stack self-managed in the cloud with a Fleet Server on it.
I have a second environment on-premise with a really limited bandwidth and I want to have some Elastic Agents here also.

My first idea was to deploy a Fleet Server on-premise that would collect all flows from the on-premise Elastic Agents and then forward that data to the Elastic stack in the cloud, but I cannot find any parameter to make the Fleet Server send to Elasticsearch in delayed batches (for example, 1 batch every hour).

So my 2 questions are:

  • Is the Fleet Server capable of sending data in batch mode to Elasticsearch?
  • If not, can an enrolled Elastic Agent do it by itself?

Thank you in advance!

Check the documentation and this. Test how it behaves if you change these parameters:

  • bulk_max_size
  • increase compression_level
  • backoff.max
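For reference, these are Beats-style Elasticsearch output settings; with Fleet-managed agents they can typically be applied through the output's advanced YAML configuration. A minimal sketch (the values below are illustrative, not recommended defaults):

```yaml
# Advanced YAML for the Elasticsearch output (values are illustrative)
bulk_max_size: 1600      # events per bulk request; larger batches, fewer requests
compression_level: 5     # gzip level 0-9; trades CPU for upload bandwidth
backoff.max: 60s         # maximum wait between retries after a failure
```

Note that none of these actually delays shipping on a schedule; they only change how data is batched and compressed on the wire.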

Thank you for your answer.

I hadn't thought about these parameters. I gave them a try, even though this is more of a workaround.
bulk_max_size and backoff.max do not really help in my case, but it looks like compression_level helps a bit. CPU usage is currently not a bottleneck for me, so I guess I can safely use this parameter for now. I may also try to use it in conjunction with QoS at the router level. That should be enough to preserve upload bandwidth.
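For the router-level QoS idea, if the on-premise egress goes through a Linux box, a minimal sketch with `tc` could cap the upload toward the cloud stack (the interface name and rate below are placeholders, not values from this thread):

```sh
# Hypothetical example: cap egress on eth0 to 5 Mbit/s with a token bucket filter
tc qdisc add dev eth0 root tbf rate 5mbit burst 32kbit latency 400ms
```

A dedicated router would use its own shaping/QoS configuration instead, but the principle is the same: throttle the agents' upload path rather than the agents themselves.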

Filebeat has more tuning settings; if it is feasible, test it with the performance-related parameters. Check the docs.
Also, if it is possible in your environment, do not use HTTPS; plain HTTP has less overhead.
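A sketch of what such throughput tuning could look like in a standalone `filebeat.yml` (host and values are placeholders; the queue settings make Filebeat wait for fuller batches before flushing):

```yaml
# Illustrative filebeat.yml fragment for bandwidth-constrained shipping
queue.mem:
  events: 8192             # buffer more events in memory
  flush.min_events: 4096   # prefer large flushes
  flush.timeout: 30s       # wait longer for a batch to fill up
output.elasticsearch:
  hosts: ["http://es.example.internal:9200"]  # placeholder host
  worker: 1                # single worker to avoid parallel uploads
  bulk_max_size: 4096
  compression_level: 7
```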
