I have a self-managed Elastic stack in the cloud with a Fleet Server running on it.
I also have a second, on-premise environment with very limited bandwidth, and I want to run some Elastic Agents there as well.
My first idea was to deploy a Fleet Server on-premise that would collect all flows from the on-premise Elastic Agents and then forward that data to the Elastic stack in the cloud, but I cannot find any parameter to batch the Fleet Server's communication to Elasticsearch (for example, one batch every hour).
So my two questions are:
Is the Fleet Server capable of sending data in batch mode to Elasticsearch?
If not, can an enrolled Elastic Agent do it by itself?
I hadn't thought about these parameters. I gave them a try, even though this is more of a workaround. `bulk_max_size` and `backoff.max` don't really help in my case, but it looks like `compression_level` helps a bit. CPU usage is currently not a bottleneck for me, so I think I can safely use this parameter for now. I may also try it in conjunction with QoS at the router level; that should be enough to preserve upload bandwidth.
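For anyone else landing here: these settings live in the Elasticsearch output section of the agent policy (they can be supplied as advanced YAML for the output in Fleet). A minimal sketch, with illustrative values rather than recommendations:

```yaml
# Advanced YAML for the Elasticsearch output of the agent policy.
# Values below are examples only; tune them for your own link.
bulk_max_size: 1600      # max events per bulk request
compression_level: 5     # gzip level 1-9; trades CPU for upload bandwidth
backoff.init: 1s         # initial wait before retrying a failed connection
backoff.max: 60s         # upper bound on the retry backoff
```

Note that `backoff.*` only delays retries after failures; it does not turn the output into an hourly batch sender, which matches what you observed.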