I have an ELK installation:
logstash -> RabbitMQ -> logstash (2 instances) -> Elasticsearch
The Elasticsearch cluster has 3 nodes, Elasticsearch 5.2 (8 cores, 32 GB RAM, SSD per node), 12 GB heap.
"number_of_replicas": 0
cluster.name: logs
node.name: el1,2,3
network.host: 0.0.0.0
transport.host: 172.30.30.180
http.port: 9200
indices.recovery.max_bytes_per_sec: 150mb
http.cors.enabled: true
http.cors.allow-origin: "*"
discovery.zen.ping.unicast.hosts: ["192.168.0.1", "192.168.0.2", "192.168.0.3" ]
discovery.zen.minimum_master_nodes: 2
action.destructive_requires_name: true
Logstash-5.2
pipeline.workers: 8
pipeline.batch.size: 500
-Xms4g
-Xmx4g
My cluster can't consume more than 1.5k events per second from RabbitMQ, while my application produces more than 2.5k events per second.
Almost all events are logs in JSON format, so the Logstash filter config looks like this:
filter { json { source => "message" } }
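For completeness, the full pipeline on each Logstash instance is just a RabbitMQ input, that filter, and an Elasticsearch output, roughly like the sketch below (the host, queue name, index pattern and tuning values here are placeholders, not my exact settings):

input {
  rabbitmq {
    host           => "rmq-host"      # placeholder for the RabbitMQ server
    queue          => "logs"          # placeholder queue name
    durable        => true
    prefetch_count => 256             # guess; actual value may differ
    threads        => 4               # guess; actual value may differ
  }
}

filter {
  json { source => "message" }
}

output {
  elasticsearch {
    hosts => ["192.168.0.1:9200", "192.168.0.2:9200", "192.168.0.3:9200"]
    index => "logstash-%{+YYYY.MM.dd}"   # default daily index pattern
  }
}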
I believe my cluster can do better. How can I find the bottleneck? Load average on the servers is about 0.2.
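So far the only thing I have checked is load average. Would looking at indexing pressure and pipeline throughput through the standard stats APIs be the right way to narrow this down? For example (these are just what I assume to be the relevant 5.x endpoints):

curl -s 'http://192.168.0.1:9200/_cat/thread_pool/bulk?v&h=node_name,active,queue,rejected'
curl -s 'http://192.168.0.1:9200/_nodes/stats/indices,thread_pool?pretty'
curl -s 'http://localhost:9600/_node/stats/pipeline?pretty'    # Logstash 5.x monitoring API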