Hi,
I am using the logstash-filter-elasticsearch plugin in Logstash.
I query an entity-centric index by document_id, which should be as fast as possible.
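For reference, the filter is roughly of this shape (the hosts, the id field, and the copied field names are placeholders, not my exact config):

```
filter {
  elasticsearch {
    # Placeholder host; in my setup Elasticsearch runs on the same machine.
    hosts => ["localhost:9200"]
    # Look up the previous event in the entity-centric index by _id.
    index => "entity-centric-ua-*"
    query => "_id:\"%{[entity_id]}\""
    # Copy fields from the matched document into the current event
    # (source field => destination field; names are illustrative).
    fields => { "@timestamp" => "previous_timestamp" }
  }
}
```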
During processing I get a lot of error messages in logstash:
ELK + Filebeat are running on the same machine (single instance) during this test.
This is the error I got:
15:41:48.152 [[main]>worker6] WARN logstash.filters.elasticsearch - Failed to query elasticsearch for previous event {:index=>"entity-centric-ua-*", :query=>"_id: \"0x583b9fa8 0x0 0x407 0x0\"", :event=>2016-11-28T03:17:00.000Z xyzblabla, :error=>#<Elasticsearch::Transport::Transport::Error: Cannot get new connection from pool.>}
Of 222k events, 114 got the error above.
I think ES is overloaded. What do I need to tune?
During this test I only imported the data, with no parallel access from Kibana during processing, which is not realistic. I guess it will get worse once I move it to production.
Any help is appreciated.
Versions:
kibana, elasticsearch, filebeat: 5.1.2
logstash: 5.2.2
Shards:
The data index and the entity-centric index both have 5 primary shards and 1 replica. The entity-centric index is rotated per week, the data index per day. Both were empty before starting the test. The test data only covers a few hours.
Thanks, Andreas