I run a cluster of 3 instances with local storage.
I'm indexing ~60 elements per second and have been doing so for the past month.
The number of concurrent queries is fairly low at the moment.
The number of file descriptors needed has been growing constantly and has now reached a point where it feels like a real issue: my current limit is 946000 and it's still not enough.
Is there anything I should do? Should I attempt to grant an unlimited number of file descriptors?
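For context, this is roughly how the configured limit can be compared against actual usage on a node. A minimal sketch, assuming a Linux host with Python on it; the PID is a placeholder for the Elasticsearch process, not a real value:

    import os
    import resource

    # Soft and hard limits on open file descriptors for this process
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("soft limit:", soft, "hard limit:", hard)

    # Descriptors currently open by a process (Linux: count entries in /proc/<pid>/fd)
    pid = 12345  # placeholder: the Elasticsearch process id
    in_use = len(os.listdir("/proc/%d/fd" % pid))
    print("descriptors in use by pid %d: %d" % (pid, in_use))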
Next time it happens, can you double check which file descriptors are being used (and gist the output)? What I mean is: can you double check whether the open file descriptors are mainly coming from the data location where ES stores its files? They might be open sockets, for example.
I am having the same problem with Elasticsearch 0.19.1. Did you do anything to fix it? I have another cluster doing the same thing, but the number of open files there is not nearly as high; that one is running 0.18.5. I am not sure if this is a version problem, but this one seems big.
Let me know if you find anything.
Thank you!
-vibin
I think it was mainly sockets indeed.
I haven't experienced the problem anymore since I limited the number of concurrent open connections to my ES instance with an external queue mechanism.
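For anyone hitting the same thing, one way to get a similar bound without a full external queue is a counting semaphore in front of the client. A rough sketch, not the actual setup described above; the endpoint, index name, and limit are placeholders:

    import json
    import threading
    import requests  # any HTTP client works; requests is just for illustration

    MAX_CONCURRENT = 32               # placeholder: tune to your node
    ES_URL = "http://localhost:9200"  # placeholder: your ES endpoint
    gate = threading.BoundedSemaphore(MAX_CONCURRENT)

    def index_document(index, doc_type, doc_id, body):
        # Blocks until one of the MAX_CONCURRENT slots is free, so at most
        # that many connections to ES are ever open from this client at once.
        with gate:
            return requests.put(
                "%s/%s/%s/%s" % (ES_URL, index, doc_type, doc_id),
                data=json.dumps(body),
                headers={"Content-Type": "application/json"},
                timeout=10,
            )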