Cannot get logs from RabbitMQ (consuming logs very slowly)

Hi, I'm using the latest versions of ES and LS.
I'm using RabbitMQ as a broker: LS Shipper -> RabbitMQ -> LS Indexer -> ES
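To illustrate, a minimal indexer config for this kind of pipeline looks roughly like this (the host, queue, and index names below are just placeholders, not my real settings; my full configs are further down):

input {
  rabbitmq {
    host    => "rabbitmq.example.local"   # placeholder broker host
    queue   => "logstash"                 # placeholder queue name
    durable => true
  }
}
output {
  elasticsearch {
    hosts => ["es.example.local:9200"]    # placeholder ES host
    index => "logstash-%{+YYYY.MM.dd}"
  }
}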

Now when I run it and check RabbitMQ, I have about 2M docs/events queued (incoming 6-10k events per second) and the ack rate is very small (20 to 100 events per second).

If I restart the LS indexer, the ack rate increases to about 4k/s, then drops back below 100 acks/s and it stops getting docs/events.

I ran the LS indexer with --debug and I see this log:

Log Error

Here are all my LS configs:

LS Shipper
LS Indexer

Config for RabbitMQ Server (Config file)

And I have many filters (filters for nginx, apache, SSH, Mail exchange, Winlogbeat, and network logs (router, switch, firewall)).

I set the LS heap size in /etc/sysconfig/logstash (I have 2 LS indexers with 16 GB RAM each):

LS_HEAP_SIZE="10g"

So, what is wrong with my config?

Hi, I'm using the latest versions of ES and LS.

It's better if you're explicit about what versions you run.

I ran the LS indexer with --debug and I see this log:

That's just a single log message so it's not very useful.

You need to simplify your setup to narrow things down. What if you replace the elasticsearch output with a simple file output that just flushes events to a local file? What if you disable all the filters?
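For example, a stripped-down test config could look roughly like this (host, queue, and path below are just placeholders):

input {
  rabbitmq {
    host  => "rabbitmq.example.local"   # placeholder broker host
    queue => "logstash"                 # placeholder queue name
  }
}
# no filter block at all for this test
output {
  file {
    path => "/tmp/indexer-test.log"     # local file, just to flush events somewhere cheap
  }
}

If the ack rate is still low with this config, the bottleneck is on the RabbitMQ/input side; if it jumps, look at your filters or the elasticsearch output.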

I'm using ES 2.2 and Logstash 2.2.2 (I updated from ES 2.1 and Logstash 2.1 yesterday).

I tried setting the output to a file, and I got the same result (ack 200-300 docs/events per second).

I removed all filters, and the ack rate increased to 1.2-2K docs/events per second. But that's still not good.

I run Logstash with this command:

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/ --debug

And I see Logstash creating connections to RabbitMQ slowly, and after that it loads many patterns, even though there are many things I don't use in my filters.

Hi, I found the problem (on my side).
I updated LS from version 2.1 to 2.2 with all plugins installed.

I tried removing all the plugins I don't use, and the ack rate increased to 3k docs/events per second.
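For anyone hitting the same thing, listing and removing unused plugins in Logstash 2.x should look roughly like this (the filter name below is just an example, replace it with the plugins you don't use):

/opt/logstash/bin/plugin list
/opt/logstash/bin/plugin uninstall logstash-filter-example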

Thanks!

Woaaaa, "threads => 400" for RabbitMQ and 12 workers for elasticsearch, isn't that too much?

On small machines (4 CPUs, 4 GB RAM, logstash+rabbitmq+ES on the same box) I can easily index 2K messages/s.

Try reducing those numbers (4 threads, 4 workers) and play with Logstash CLI options such as pipeline-workers and batch-size (see the sketch after the list below). ES indexing could also be the bottleneck, did you:

  • Set refresh_interval to something like 5s
  • Optimize your mapping/analyzers
  • Use bigdesk to see how many documents are indexed
  • Watch log files to see if any index errors occurred
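For example, something roughly like this as a starting point (host, queue, and index names are placeholders, and the numbers are only a baseline to tune from):

input {
  rabbitmq {
    host    => "rabbitmq.example.local"   # placeholder broker host
    queue   => "logstash"                 # placeholder queue name
    threads => 4                          # down from 400
  }
}
output {
  elasticsearch {
    hosts   => ["es.example.local:9200"]  # placeholder ES host
    workers => 4                          # down from 12
  }
}

/opt/logstash/bin/logstash -f /etc/logstash/conf.d/ --pipeline-workers 4 --pipeline-batch-size 250

curl -XPUT 'http://localhost:9200/logstash-*/_settings' -d '{ "index": { "refresh_interval": "5s" } }'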

Can you suggest values for me with 8 vCPUs and 16 GB RAM? :slight_smile:

I decreased to 20 threads and 8 workers :slight_smile:

When you do sizing, the best approach is to start with low values, monitor your metrics, then increase the values, monitor again, etc... until you find the best values.

Try with everything at 1, check how many consumers there are in the RabbitMQ web UI and how many documents are indexed per second (with the ES bigdesk plugin), and also check the ES bulk thread pool queue. Also watch your CPU/RAM.
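To check the bulk thread pool, the cat API should do it (watch the bulk queue and rejected columns):

curl 'http://localhost:9200/_cat/thread_pool?v'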

Test it with a queue of 100,000 messages.

Then increase the values to 2, do the same thing, then 4, etc.

You should see that performance doesn't scale linearly, and you'll find a good value.

Also, you should test ES indexing performance separately; a big mapping or wrong settings could be the bottleneck.
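A quick way to test ES on its own is to push a small bulk request straight at it, without Logstash in the loop (the index, type, and field names below are just examples):

curl -XPOST 'http://localhost:9200/_bulk' --data-binary '{ "index": { "_index": "bulk-test", "_type": "logs" } }
{ "message": "test event 1" }
{ "index": { "_index": "bulk-test", "_type": "logs" } }
{ "message": "test event 2" }
'

Scale that up (or use a benchmarking tool) and compare the raw indexing rate with what you see through Logstash.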
