Hi, I'm using the latest versions of ES and LS.
I'm using RabbitMQ as a broker: LS Shipper -> RabbitMQ -> LS Indexer -> ES.
When I run this and check RabbitMQ, I see about 2M docs/events sitting in the queue (incoming 6-10k events/docs per second), while the ack rate is very small (around 20-100 docs/events per second).
If I restart the LS Indexer, the ack rate jumps to about 4k/s, then drops back below 100 acks/s and it stops consuming docs/events.
It would help if you were explicit about which versions you're running.
I ran the LS indexer with --debug, and this is the log I see:
That's just a single log message so it's not very useful.
You need to simplify your setup to narrow things down. What if you replace the elasticsearch output with a simple file output that just flushes events to a local file? What if you disable all the filters?
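For example, something along these lines (a minimal sketch; the broker host, queue name, and output path are placeholders, not your actual settings):

```
input {
  rabbitmq {
    host    => "rabbitmq.example.com"   # placeholder broker host
    queue   => "logstash"               # placeholder queue name
    durable => true
  }
}

# No filters at all for this test.

output {
  # Dump events to a local file instead of Elasticsearch.
  file {
    path => "/tmp/logstash-drain.log"
  }
}
```

If the ack rate stays high with this config, the problem is on the Elasticsearch/output side; if it's still slow, look at the RabbitMQ input settings and the filters.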
Whoa, "threads => 400" for the RabbitMQ input and 12 workers for the elasticsearch output, isn't that too much?
On small machines (4 CPUs, 4 GB RAM, running logstash + rabbitmq + ES together) I easily manage to index 2K messages/s.
Try reducing these numbers (say, 4 threads and 4 workers) and play with the Logstash CLI options such as pipeline workers and batch size (see the sketch after this list). Also, ES indexing could be the bottleneck, so did you:
Set refresh_interval to something like 5s
Optimize your mapping/analyzers
Use bigdesk to see how many documents are indexed
Watch the log files to see if any indexing errors occurred
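A rough sketch of what reduced settings could look like (host names, queue name, and values are placeholders, and the exact option and flag names depend on your Logstash and plugin versions):

```
input {
  rabbitmq {
    host           => "rabbitmq.example.com"   # placeholder
    queue          => "logstash"                # placeholder
    threads        => 4                         # down from 400
    prefetch_count => 256
  }
}

output {
  elasticsearch {
    hosts   => ["es.example.com:9200"]          # placeholder
    workers => 4                                # down from 12
  }
}
```

Then start the indexer with something like `bin/logstash -w 4 -b 500 -f indexer.conf` to control the pipeline workers and batch size (the flag names vary between Logstash versions).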
When you do sizing, the best approach is to start with low values, monitor your metrics, then increase the values, monitor again, and so on until you find the best settings.
Start with everything at 1, check how many consumers show up in the RabbitMQ web UI and how many documents are indexed per second (ES plugin bigdesk), and also check the ES bulk thread pool queue. Watch your CPU/RAM as well.
Test it with a queue of 100,000 messages.
Then increase the values to 2, do the same thing, then 4, and so on.
You should see that performance is not linear, and find a good value.
Also, you should test ES indexing performance separately; a big mapping or wrong settings could be the bottleneck.
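One way to do that with Logstash alone (a sketch; the hosts and index name are placeholders I made up) is to drive the elasticsearch output from the generator input, so RabbitMQ is out of the loop entirely:

```
input {
  # Synthetic events, no broker involved.
  generator {
    lines => ['{"message":"benchmark event","status":200}']
    count => 100000      # stop after 100k events
    codec => "json"
  }
}

output {
  elasticsearch {
    hosts => ["es.example.com:9200"]   # placeholder
    index => "bench-%{+YYYY.MM.dd}"    # throwaway test index
  }
}
```

If this indexes much faster than your real pipeline, look at the RabbitMQ input and your filters; if it's just as slow, look at the mapping, refresh_interval and the ES bulk queue.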