Hello, I'm seeing strange Elasticsearch behavior on my test cluster. The cluster has one node with a single logs index that has 10 primary shards and no replicas. I've applied some tuning to this index:
"index.refresh_interval": "30s",
"index.translog.flush_threshold_size": "2gb",
"index.translog.durability": "async",
"index.number_of_replicas": "0"
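For reference, settings like these can be applied at runtime through the `_settings` API; this is just a sketch, the index name `logs` and the host/port are assumptions to adjust for your setup:

```shell
# Hypothetical example: apply the tuning above to an existing index named "logs".
# All four settings are dynamic, so no close/reopen is needed.
curl -X PUT "localhost:9200/logs/_settings" -H 'Content-Type: application/json' -d'
{
  "index.refresh_interval": "30s",
  "index.translog.flush_threshold_size": "2gb",
  "index.translog.durability": "async",
  "index.number_of_replicas": "0"
}'
```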
I have 3 hosts, each running Filebeat/Logstash and sending to this index.
When I send logs from all three hosts, I reach a stable indexing rate of around ~60K docs/sec.
BUT if I turn off 2 hosts and leave only one, the indexing rate is only ~20K docs/sec. (I'm doing the initial indexing of very big log files, hundreds of GB, so the rate could be much higher than 60K even from one host.)
Why? Where is the limit? The logs on all hosts are almost identical.
I tried splitting one host into two Filebeat/Logstash pipelines, with half of the logs processed by one pair and the other half by the second, but the index rate was again 20K.
It sounds like network saturation, but the 1 Gbit network interface shows <100 Mbit of traffic.
I've tried different pipeline.batch.size values, from 100 to 1000, with no difference.
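For reference, batch size can be raised together with the pipeline worker count at launch; this is a sketch, the pipeline file name is hypothetical and the worker count is just an example value:

```shell
# Hypothetical example: -b sets pipeline.batch.size, -w sets pipeline.workers.
# Raising batch size alone may not help if worker threads are the bottleneck.
bin/logstash -f my-pipeline.conf -b 1000 -w 8
```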
When all 3 hosts are running, I get a lot of 429 errors, but the index rate is still 60K. When only one host is running, there are far fewer 429s, but the index rate is also low, at 20K.
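To see whether those 429s come from the write thread pool filling up, the queue depth and rejection counters can be checked on the ES node; a sketch, assuming a recent version where the pool is named `write` (older releases call it `bulk`):

```shell
# Show active threads, queued requests, and cumulative rejections
# for the write thread pool on the single node.
curl -s "localhost:9200/_cat/thread_pool/write?v&h=name,active,queue,rejected"
```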
How can I reach 60K from one host? Thanks.