After replacing the .war file and restarting the server, I get the
following error: https://gist.github.com/1340424
I'm using v0.18.2 with aws-s3. I never got this error with previous
versions. The exception is thrown when the first user accesses the
servlet.
What's going on?
Not sure if v0.18.2 is using more file descriptors. We configured
ours with an 80,000 file descriptor limit, and it was running fine with
0.17.x. We upgraded to 0.18.2 five days ago and only hit the "too many
open files" error today. I've just bumped the limit to 140,000.
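For what it's worth, a minimal sketch for double-checking the descriptor limit a process actually inherits (assuming it is run under the same user and session that launches Elasticsearch; the limit you configured should show up here):

```python
import resource

# Soft/hard limit on open file descriptors for the current process.
# Run this as the same user (and in the same shell/session) that starts
# Elasticsearch, so the inherited limits match what the server gets.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print("soft nofile limit:", soft)
print("hard nofile limit:", hard)
```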
0.18.x should not use more file descriptors than 0.17. Can you tell which
file descriptors are being used by the process (lsof -p), or whether anything
else changed besides the version bump?
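As a rough sketch of how to summarize that lsof-style information, one can also read /proc/<pid>/fd directly and group the open descriptors by target directory (the PID below is a placeholder, not from this thread):

```python
import os
from collections import Counter

PID = 12345  # placeholder: substitute the Elasticsearch process id

fd_dir = "/proc/%d/fd" % PID
counts = Counter()
for fd in os.listdir(fd_dir):
    try:
        target = os.readlink(os.path.join(fd_dir, fd))
    except OSError:
        continue  # descriptor closed between listdir() and readlink()
    # Group by directory so data files (nodes/0/indices/...) stand out
    # from sockets, pipes and jars.
    counts[os.path.dirname(target)] += 1

print("total open descriptors:", sum(counts.values()))
for path, n in counts.most_common(20):
    print(n, path)
```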
In our case, the only configuration/application change is the ES version.
However, we do have a higher traffic load than we used to.
lsof shows almost all of the file descriptors in use were on files in the ES
data directory (nodes/0/indices). Here is our cluster info:
At the time the issue occurred, there were 79,501 files in the data
directory nodes/0/indices/*. It looks like too many segment files were
being created.
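For reference, a small sketch of how the per-index file counts can be tallied; the data path is an assumption based on the default layout, so adjust it to the actual path.data setting:

```python
import os
from collections import Counter

# Assumed default layout; point this at your actual data directory.
DATA_DIR = "/var/lib/elasticsearch/data/nodes/0/indices"

per_index = Counter()
for root, dirs, files in os.walk(DATA_DIR):
    rel = os.path.relpath(root, DATA_DIR)
    # The first path component under indices/ is the index name.
    index = rel.split(os.sep)[0] if rel != "." else "(top level)"
    per_index[index] += len(files)

print("total files:", sum(per_index.values()))
for index, n in per_index.most_common(10):
    print(index, n)
```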