Logstash 2.2.1 and 429s all over the place


I have a 6-node ES cluster and am trying to run two instances of Logstash against it.
Both read from stdin, with hosts => set to all the nodes and the default 4 workers, and Elasticsearch returns 429s instantly.
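
For reference, a minimal sketch of the kind of pipeline config being described, assuming a stdin input and the elasticsearch output plugin (the node hostnames are placeholders):

```
input {
  stdin {}
}

output {
  elasticsearch {
    # all cluster nodes listed, as described above
    hosts   => ["es-node1:9200", "es-node2:9200", "es-node3:9200",
                "es-node4:9200", "es-node5:9200", "es-node6:9200"]
    # default is 4; dropping this to 1 is what stopped the 429s
    workers => 4
  }
}
```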

If I set workers to 1 they're fine, but indexing speed is poor. And adding another two instances with single workers makes ES scream again :slight_smile:

Any tips would be appreciated. I don't think the previous pipeline performed this badly.

Many thanks!

What is the specification and configuration of your cluster? What is your heap size? Is there anything in the Elasticsearch logs? What type of data are you indexing?

Hi @Christian_Dahlqvist

Thanks for the quick response.

I'm using the default config for the time being, with 7 nodes and an 8GB heap on each.

I've found that, by mistake, my index templates were allocating 333 shards per index. My PuTTY client has a silly lag and sometimes mistakes like this slip through.

After correcting it I could easily spawn 50 workers with no issues.
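
In case anyone hits the same thing: the shard count comes from the index template, so it can be capped there. A hedged example (the template name, index pattern, and shard count are placeholders, not the actual values from my setup):

```
curl -XPUT 'http://localhost:9200/_template/logstash' -d '{
  "template": "logstash-*",
  "settings": {
    "number_of_shards": 6
  }
}'
```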

My guess is that the bulk queue capacity was overflowing. I've pasted log excerpts at http://pastebin.com/raw/VBbYcDaY as they did not fit here.
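
A quick way to confirm that theory is to watch the bulk thread pool while indexing; rejections there are what surface as 429s on the client (hostname is a placeholder):

```
curl 'http://localhost:9200/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected'
```

If bulk.rejected keeps climbing while bulk.queue sits at its limit, the cluster is shedding load rather than keeping up.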