"Too many open files" exception occurs

Hi all,
While indexing approximately 400 GB of data, a "Too many open files"
exception occurred.

My Elasticsearch (version 0.20.5) is running on a machine with 16 GB of
memory and an 8 GB heap.
I used a one-node cluster with two shards for indexing.

There are two shard folders, and in one of them I found 270,000 files.

I suspect that the merge operation is not working properly, or maybe the
indexing rate is much higher than the merge rate, but I am not sure...

Has anyone seen a similar case, or does anyone have an opinion about it?

Waiting for your answer... :)


While 270k files is a lot, it can obviously happen. Still, I wonder: are you
searching on that cluster while indexing, do you have any merge throttling in
place, or did you prevent merges by setting merge policy parameters?
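
For reference, these are the kinds of settings I mean; roughly, in
elasticsearch.yml (illustrative values only, not recommendations -- check the
docs for your exact 0.20.x release):

  indices.store.throttle.type: merge
  indices.store.throttle.max_bytes_per_sec: 20mb
  index.merge.policy.segments_per_tier: 10

If throttling is very aggressive, or the policy tolerates a large number of
segments, the file count can keep climbing while indexing outruns merging.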


Hi simonw,
I used the default merge settings, and I was just indexing, without searching.


Hi simon,
Since the error message is "Too many open files", I suspect that the number
of files open during indexing exceeds the limit set by the Linux OS.
I checked the Linux settings; here is what I saw:

https://lh3.googleusercontent.com/-ILxAq5yH2Cw/UTWF89LqnjI/AAAAAAAAABE/7wWZ7vToV8Y/s1600/ulimit-a.PNG
How can I check the number of open files held by Elasticsearch?


Hi,

That shows a maximum of 1024 open files. That's very low. Don't be afraid to
set it even to 100K :)
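
For example (just a sketch -- assuming the node runs as a user named
"elasticsearch"; adjust the user and value to your setup), in
/etc/security/limits.conf:

  elasticsearch  -  nofile  100000

or, in the shell that launches the node, before starting it:

  ulimit -n 100000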

But 270K files is indeed crazy high. Are you sure some of them aren't very,
very old and no longer part of the active index? If they are indeed old, you
could remove them.

SPM for ES can show you network connection stats. I believe open sockets
count as open files because they consume file descriptors, so this metric
is good to keep an eye on. See attached screenshot or look at
elasticsearch spm performance monitoring | otis | Flickr -- I circled what I mentioned
here.
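
To answer your earlier question directly: on Linux you can count the
descriptors the process holds, for example (assuming a single Elasticsearch
process; the pgrep pattern is just an illustration):

  ls /proc/$(pgrep -f elasticsearch)/fd | wc -l

or, more broadly (lsof also lists memory-mapped files and the like, so its
count runs higher):

  lsof -p $(pgrep -f elasticsearch) | wc -l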

Otis

ELASTICSEARCH Performance Monitoring - Sematext Monitoring | Infrastructure Monitoring Service


Oh, and is it possible that you changed the merge factor to some high value?
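
(For context: merge_factor applies to the log-based merge policies; as an
index setting it would look roughly like this, 10 being the usual default:

  index.merge.policy.merge_factor: 10

A much higher value lets many more segments -- and therefore files --
accumulate before merges kick in.)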

Otis

ELASTICSEARCH Performance Monitoring - Sematext Monitoring | Infrastructure Monitoring Service
