Excessive Number of Open File Descriptors

Hi everyone,

I run a cluster of 3 instances with local storage.
I'm indexing ~60 elements per second and have been doing so for the past
month.
The number of concurrent queries is fairly low at the moment.

The number of file descriptors needed has been growing constantly and has
now reached a point where it feels like a real issue:
my current limit is 946,000 and it's still not enough.

Is there anything I should do?
Should I attempt to grant an unlimited number of file descriptors?

Thanks for your help!

Best,

-stan

--
Stanislas Polu
Mo: +33 6 83 71 90 04 | Tw: @spolu | http://teleportd.com | Realtime Photo
Search
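For reference, a process's descriptor limit and current usage can both be read from /proc on Linux, without lsof. A minimal sketch, using the current shell's PID as a stand-in for the Elasticsearch process ID:

```shell
# Stand-in PID; substitute the Elasticsearch process ID in practice.
PID=$$

# Per-process soft/hard limit on open files (Linux).
grep 'open files' /proc/$PID/limits

# Current number of open descriptors (cheaper than lsof -p <pid> | wc -l).
ls /proc/$PID/fd | wc -l
```

Reading /proc directly avoids lsof's overhead, which matters when a process holds hundreds of thousands of descriptors.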

Does your number of shards (indices) grow? Or do you just continue to index
more and more data into the same index?


Nope, posting to the same index.

On Monday, April 16, 2012 9:13:18 AM UTC+2, kimchy wrote:

Does your number of shards (indices) grows? Or you just continue to index
into the same index more and more data?

On Mon, Apr 16, 2012 at 9:16 AM, Stanislas Polu polu.stanislas@gmail.comwrote:

Hi everyone,

I run a cluster of 3 instances with local storage.
I'm indexing ~60 elements per seconds and have been doing so for the past
month.
The number of concurrent queries is fairly low at the moment.

The number of file descriptors needed has been constantly growing and has
now grown to a point where it feels like a real issue
my current limit is 946000 and it's still not enough

Is there anything I should do?
Should I attempt to grant unlimited number of file descriptors?

Thanks for your help!

Best,

-stan

--
Stanislas Polu
Mo: +33 6 83 71 90 04 | Tw: @spolu | http://teleportd.com | Realtime
Photo Search

I've restarted everything and I'm now at 1,106 open file descriptors (lsof
-p <pid> | wc -l).


Next time it happens, can you double-check which file descriptors are being
used (and gist the output)? What I mean is: can you verify that the open
file descriptors are mainly coming from the data location where ES stores
its files? They might be open sockets, for example.
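The check Shay describes, separating data files from sockets, can be done with lsof or, without it, by resolving the links under /proc. A sketch along those lines, again using the current shell's PID as a placeholder:

```shell
# Placeholder PID; substitute the Elasticsearch process ID.
PID=$$

# Classify each descriptor's target: sockets and pipes are collapsed to
# one label each, absolute paths become "file"; anything else (e.g.
# anon_inode entries) is left as-is. Then count by type.
for fd in /proc/$PID/fd/*; do readlink "$fd" 2>/dev/null; done \
  | sed -e 's/^socket:.*/socket/' -e 's/^pipe:.*/pipe/' -e 's,^/.*,file,' \
  | sort | uniq -c | sort -rn
```

If "socket" dominates the output, the leak is in connections rather than in the index files under the data directory.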


Hi Polu/Shay,

I am having the same problem with Elasticsearch 0.19.1. Did you do
anything to fix it? I have another cluster doing the same thing, but the
number of open files there is not nearly as high; that one runs 0.18.5.
I am not sure if this is a version problem, but it seems significant.

Let me know of any findings.

Thank you!

-vibin


--
Regards,

  • vibindhas

Can you gist the list of open file handles you have? The output of
lsof -p <pid>, basically.



Sorry for the latency here.

I think it was mainly sockets indeed.
I haven't experienced the problem anymore since I limited the number of
concurrent open connections to my ES instance with an external queue
mechanism.

Best,

-stan
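Stan's fix, bounding how many connections hit the node at once, can be sketched in-process with a semaphore in front of the client. The cap and the indexing call below are stand-ins, not the actual teleportd setup:

```python
import threading

MAX_CONCURRENT = 32  # assumed cap; tune to what the node can sustain

_slots = threading.BoundedSemaphore(MAX_CONCURRENT)

def index_document(doc):
    """Stand-in for the real Elasticsearch indexing call."""
    return {"ok": True, "doc": doc}

def bounded_index(doc):
    # At most MAX_CONCURRENT callers hold a connection at once, which
    # keeps the node's open-socket (and thus descriptor) count bounded
    # regardless of how many producer threads exist.
    with _slots:
        return index_document(doc)
```

An external queue (as Stan used) achieves the same bound across processes and machines, with the added benefit of buffering bursts instead of rejecting them.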

