Sorry for the latency here.
I think it was mainly sockets indeed.
I haven't experienced the problem anymore since I limited the number of
concurrent open connections to my ES instance with an external queue.
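For what it's worth, a minimal sketch of that approach (names and the limit are my own assumptions, not teleportd's actual code): funnel every index request through a fixed-size worker pool, so at most N requests, and hence at most N sockets, are ever in flight at once.

```python
from concurrent.futures import ThreadPoolExecutor

# Assumption: 8 is illustrative; tune so in-flight sockets stay
# well under the process's file descriptor limit.
MAX_CONNECTIONS = 8

def index_document(doc):
    # Placeholder for the real HTTP call to the ES index endpoint.
    return doc

def index_all(docs):
    # The pool acts as the "external queue": pending documents wait
    # here instead of each opening their own connection to ES.
    with ThreadPoolExecutor(max_workers=MAX_CONNECTIONS) as pool:
        return list(pool.map(index_document, docs))
```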
Mo: +33 6 83 71 90 04 | Tw: @spolu | http://teleportd.com | Realtime Photo
On Tue, Apr 17, 2012 at 10:25 AM, Shay Banon email@example.com wrote:
Next time it happens, can you double check which file descriptors are
being used (and gist the output of it)? What I mean, can you double check
that the open file descriptors are mainly coming from the data location ES
stores the files? It might be open sockets for example.
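On Linux you can get that breakdown without lsof by reading the symlinks in /proc/&lt;pid&gt;/fd; sockets show up as "socket:[inode]" targets while data files resolve to real paths. A small sketch (Linux-only, hypothetical helper):

```python
import os

def fd_breakdown(pid):
    """Count a process's open fds by coarse category, by reading
    /proc/<pid>/fd. Lets you see at a glance whether the descriptors
    are data files from the ES store or open sockets."""
    counts = {}
    fd_dir = "/proc/%d/fd" % pid
    for fd in os.listdir(fd_dir):
        try:
            target = os.readlink(os.path.join(fd_dir, fd))
        except OSError:
            continue  # fd was closed while we were scanning
        if target.startswith("socket:"):
            kind = "socket"
        elif target.startswith("pipe:"):
            kind = "pipe"
        elif target.startswith("anon_inode:"):
            kind = "anon_inode"
        else:
            kind = "file"
        counts[kind] = counts.get(kind, 0) + 1
    return counts
```

The equivalent one-liner with lsof would be grouping its TYPE/NAME columns, but the /proc view is handy when lsof itself is slow on hundreds of thousands of fds.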
On Mon, Apr 16, 2012 at 10:33 AM, Stanislas Polu <firstname.lastname@example.org> wrote:
I've restarted everything and I'm now at 1106 open file descriptors
(lsof -p <pid> | wc -l).
On Monday, April 16, 2012 9:21:35 AM UTC+2, Stanislas Polu wrote:
Nope, posting to the same index.
On Monday, April 16, 2012 9:13:18 AM UTC+2, kimchy wrote:
Does your number of shards (indices) grow? Or do you just continue to
index more and more data into the same index?
On Mon, Apr 16, 2012 at 9:16 AM, Stanislas Polu wrote:
I run a cluster of 3 instances with local storage.
I'm indexing ~60 elements per second and have been doing so for the
The number of concurrent queries is fairly low at the moment.
The number of file descriptors needed has been growing constantly and
has now reached a point where it feels like a real issue:
my current limit is 946000 and it's still not enough.
Is there anything I should do?
Should I attempt to grant unlimited number of file descriptors?
Thanks for your help!