I am new to Filebeat. In our environment, we have deployed an Elasticsearch cluster using ECK on a Kubernetes (k8s) cluster. The k8s cluster has 1 master node and 3 worker nodes.
The Elasticsearch cluster is deployed with 1 coordinating node, 3 data nodes, and 3 master nodes; each data node has a 500 GB PVC. Additionally, we deployed 3 Filebeat pods (one on each k8s worker node) and 1 Logstash pod.
We have custom log-generator scripts that produce fake Nginx logs at a throughput of 100 MB per second. These fake logs are generated on all 3 worker nodes of the k8s cluster; a total of 900 GB of logs has been generated so far across the 3 nodes.
Filebeat successfully collects the logs from the log generator and ships them to Logstash, which forwards them to Elasticsearch, and they show up in the Kibana dashboard. There is only 1 index, with 1 primary shard and 2 replicas. Since the single shard had grown beyond 200 GB, we reindexed the existing index into a new one with 5 primary shards and 2 replicas.
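For context, the resharding was done along these lines in Kibana Dev Tools (the index names `nginx-logs` and `nginx-logs-v2` are placeholders, not our actual names):

```
PUT nginx-logs-v2
{
  "settings": {
    "number_of_shards": 5,
    "number_of_replicas": 2
  }
}

POST _reindex?wait_for_completion=false
{
  "source": { "index": "nginx-logs" },
  "dest":   { "index": "nginx-logs-v2" }
}
```

`wait_for_completion=false` returns a task ID, which is handy for a 200+ GB reindex since the request would otherwise time out; progress can be checked with `GET _tasks/<task_id>`.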
What we would like to achieve is a throughput of 100 MB per second from Filebeat, i.e., a Filebeat read & write throughput of 100 MB per second.
Here are a few queries on the above problem statement:
- How do I verify that Filebeat is writing, say, 1 MB per second?
- Is it possible to achieve 100 MB per second read/write throughput with Filebeat? If so, where do I make the changes: in the Filebeat config (ConfigMap) or in the DaemonSet YAML?
- In the Filebeat logs, we observe that read and write bytes appear under the libbeat section. Total read bytes are much lower than write bytes. Why is that, and how can I increase the read bytes?
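For the first two questions, what I have found so far is Filebeat's own monitoring: it can expose stats over a local HTTP endpoint and log metric deltas periodically. A minimal filebeat.yml sketch (the Logstash host, queue sizes, and worker counts below are illustrative assumptions, not recommendations):

```yaml
# Expose internal stats at http://localhost:5066/stats
http.enabled: true
http.host: localhost
http.port: 5066

# Also log metric deltas every 30s ("Non-zero metrics" lines)
logging.metrics.enabled: true
logging.metrics.period: 30s

# Larger in-memory queue lets harvesters read ahead of the output
queue.mem:
  events: 65536
  flush.min_events: 2048
  flush.timeout: 1s

output.logstash:
  hosts: ["logstash:5044"]   # hypothetical service name
  workers: 4                 # parallel connections to Logstash
  bulk_max_size: 4096        # events per batch
```

Taking two snapshots of `libbeat.output.write.bytes` from `/stats` (e.g. 60 s apart) and dividing the difference by the interval gives the write throughput. Note also, regarding the third question: as far as I understand, the libbeat `read.bytes`/`write.bytes` counters measure network traffic on the output connection, so write bytes are the events sent to Logstash while read bytes are only the ACK/protocol responses coming back; read being much smaller than write is therefore expected, not a bottleneck.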