While 270k is a lot, it can obviously happen. Yet, I still wonder: are you
searching on that cluster while indexing, do you have any merge throttling
in place, or have you prevented any merges by setting merge policy
parameters?
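(For reference, a quick way to check whether any merge policy parameters were
overridden is to read them back from the index settings API; a minimal sketch
in Python, assuming a node on localhost:9200 and a hypothetical index named
"myindex", and that the 0.20.x settings response uses flattened keys:)

    import json
    import urllib.request

    # Fetch the settings of a hypothetical index "myindex" from a local node
    # and print any merge-related overrides.
    with urllib.request.urlopen("http://localhost:9200/myindex/_settings") as resp:
        settings = json.load(resp)

    for key, value in settings["myindex"]["settings"].items():
        if "merge" in key:
            print(key, "=", value)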
On Monday, March 4, 2013 9:26:41 AM UTC+1, Hoony wrote:
Hi all,
When indexing approximately 400 GB, a "Too many open files" exception
occurred.
My Elasticsearch (version 0.20.5) is running on a machine with 16 GB of
memory and an 8 GB heap.
I used a one-node cluster with two shards for indexing.
There are two shard folders, and in one of them I found 270,000 files.
I suspect that merging is not working properly, or maybe the indexing
rate is much higher than the merge rate, but I am not sure...
Hi simonw,
I used the default merge settings, and I am just indexing, without searching.
Hi simon,
Since the error message is "Too many open files", I suspect that the
number of files open during indexing exceeded the limit set by the Linux
OS.
I checked the Linux settings: the maximum number of open files is set to 1024.
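(A minimal sketch of reading that limit programmatically, using Python's
standard resource module; RLIMIT_NOFILE is the per-process open-files limit
that "ulimit -n" reports:)

    import resource

    # The soft limit is what the process is actually held to; the hard limit
    # is the ceiling the soft limit can be raised to without root.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print("soft open-files limit:", soft)
    print("hard open-files limit:", hard)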
That shows 1024 open files max. That's very low. Don't be afraid to set
this to 100K, even.
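(A process can raise its own soft limit up to the hard limit; a minimal
sketch in Python — for a permanent fix you would normally raise the limit in
/etc/security/limits.conf or with ulimit before starting the node:)

    import resource

    # Raise the soft open-files limit toward 100,000, capped by the hard limit.
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    target = 100_000 if hard == resource.RLIM_INFINITY else min(100_000, hard)
    resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
    print("soft limit raised from", soft, "to", target)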
But 270K files is indeed crazy high. Are you sure some of them aren't
very, very old and no longer really part of the active index? If they are
indeed old, you could remove them.
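(One way to spot suspiciously old files is to scan the shard directory for
anything not modified recently; a sketch with a hypothetical data path and a
30-day cutoff — verify files really are stale before deleting anything:)

    import os
    import time

    # Hypothetical shard index directory; adjust to your actual data path.
    SHARD_DIR = "/var/data/elasticsearch/nodes/0/indices/myindex/0/index"
    CUTOFF = time.time() - 30 * 24 * 3600  # 30 days ago

    for entry in os.scandir(SHARD_DIR):
        if entry.is_file() and entry.stat().st_mtime < CUTOFF:
            print(entry.name, "last modified", time.ctime(entry.stat().st_mtime))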
SPM for ES can show you network connection stats. I believe open sockets
count as open files because they consume file descriptors, so this metric
is good to keep an eye on. See the attached screenshot, or the
"elasticsearch spm performance monitoring" image on otis's Flickr — I
circled what I mentioned here.
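(On Linux you can watch that number directly, since sockets and index files
both consume entries under /proc/<pid>/fd; a minimal sketch, assuming you
know the Elasticsearch process id and can read its /proc entry:)

    import os

    # Hypothetical pid of the Elasticsearch JVM; find the real one with ps or jps.
    ES_PID = 12345

    # Each entry here is one file descriptor: regular files, sockets, pipes, ...
    fds = os.listdir(f"/proc/{ES_PID}/fd")
    print(f"process {ES_PID} has {len(fds)} open file descriptors")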