Difference in Filebeat + ES performance


I currently have 2 setups:

  1. Filebeat v8.3.3 + ES v8.3.3
    • different physical servers connected to the same subnet
    • 3-node ES (each configured as master + data)
    • Total of 90GB JVM heap
    • Total of 18TB hard disk space (running on SSD)
  2. Filebeat v8.3.3 (same as above) + ES v8.8.0
    • ES is running on Kubernetes pods, connected to a different subnet from the Filebeat servers.
    • 3 master nodes + 6 data nodes
    • Total of 279GB JVM heap
    • Total of 90TB hard disk space (running on SSD)
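
Both setups use the same plain `output.elasticsearch` section in `filebeat.yml`. As a sketch (hostnames are placeholders, and the two tuning knobs shown are the stock Filebeat options I'd expect to matter, left at what I believe are their defaults):

```yaml
# Sketch only, not my exact file; hosts are placeholders.
output.elasticsearch:
  # Setup 1: the 3 physical ES nodes; Setup 2: the Kubernetes-exposed endpoints.
  hosts: ["https://es-node1:9200", "https://es-node2:9200", "https://es-node3:9200"]
  worker: 1            # concurrent bulk workers per configured host
  bulk_max_size: 1600  # max events per bulk request
```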

Spec-wise, Setup 2 running on Kubernetes should outperform my small-scale Setup 1. However, I'm seeing more packet drops (as reported in the Filebeat logs) in Setup 2. The indexing rate is similar on both setups, but Setup 2's drop rate is ~8% higher than Setup 1's when I run one Filebeat instance, and ~16% higher when I run two. With two Filebeat instances, Setup 2's absolute drop rate reaches ~60%.

I have 3 questions.

  1. How is it possible that the indexing rate is similar, but the packet drop rate is higher? Shouldn't the indexing rate drop too when the packet drop rate is higher?
  2. Where is the bottleneck here? Given that my Filebeat setup is identical in both cases (same servers, just pointing at a different ES cluster), is ES the bottleneck?
  3. I'm not sure why Setup 2 has worse performance. Could it be the virtualization, or the fact that the packets are going to a different subnet?
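
For question 1, my rough mental model of how the indexing rate could stay flat while the drop rate climbs (toy numbers only, not measurements from my clusters): if ES's ingest capacity is saturated, everything above capacity is dropped, so doubling the input doubles the drop percentage without moving the indexed throughput.

```python
# Toy model (hypothetical numbers): once ES indexing capacity is saturated,
# adding a second Filebeat raises the drop rate while indexed throughput stays flat.
def throughput_and_drop_rate(events_per_sec: int, es_capacity: int):
    indexed = min(events_per_sec, es_capacity)   # ES indexes up to its capacity
    dropped = events_per_sec - indexed           # the rest is dropped
    return indexed, dropped / events_per_sec

print(throughput_and_drop_rate(100_000, 80_000))  # one Filebeat:  (80000, 0.2)
print(throughput_and_drop_rate(200_000, 80_000))  # two Filebeats: (80000, 0.6)
```

Under this model the indexed rate is identical in both runs, yet the drop rate triples. Is that a reasonable way to think about what I'm observing?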

Thank you.
