Too Many Open Files

Hi,

I am using version 0.19.3. I have the nofile limit set to 128K and am
getting errors like:

[2014-01-18 06:52:54,857][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to initialize an accepted socket.
org.elasticsearch.common.netty.channel.ChannelException: Failed to create a selector.
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:154)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.register(AbstractNioWorker.java:131)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.registerAcceptedChannel(NioServerSocketPipelineSink.java:269)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:231)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)
Caused by: java.io.IOException: Too many open files
    at sun.nio.ch.IOUtil.makePipe(Native Method)
    at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:65)
    at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:36)
    at java.nio.channels.Selector.open(Selector.java:227)
    at org.elasticsearch.common.netty.channel.socket.nio.AbstractNioWorker.start(AbstractNioWorker.java:152)
    ... 7 more

I am aware that version 0.19.3 is old. We have been having trouble getting
our infrastructure group to build out new nodes, which we need in order to
run a rolling upgrade while testing both versions. I am now setting the
limit to 1048576 as per http://stackoverflow.com/questions/1212925/on-linux-set-maximum-open-files-to-unlimited-possible,
but I'm concerned this may cause other issues.
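For context, here is roughly how we are raising it (the `elasticsearch` user name is specific to our setup, so treat this as a sketch):

```shell
# Sketch of the limit bump (assumes the daemon runs as user "elasticsearch").
# In /etc/security/limits.conf we add:
#   elasticsearch  soft  nofile  1048576
#   elasticsearch  hard  nofile  1048576

# Then verify what a fresh shell actually inherits -- soft and hard limits:
ulimit -Sn
ulimit -Hn
```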

If anyone has any suggestions I'd love to hear them. I am using this as
fuel for the "please pay attention and get us the support we need so we can
upgrade" campaign.

--Shannon Monasco

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/elasticsearch/d76b08e4-d9d2-407e-8443-cb654f381c9a%40googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.

Sorry, wrong error message.

[2014-01-18 06:47:06,232][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a connection.
java.io.IOException: Too many open files
    at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
    at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:226)
    at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:227)
    at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

The error message in my first post has a later timestamp and seems to
follow this "Too many open files" error.

--Shannon Monasco

On Tuesday, January 21, 2014 8:35:18 AM UTC-7, smonasco wrote:


The first thing to do is check whether your limits are actually being
persisted and used. The elasticsearch site has a good writeup on
configuring file descriptor limits.
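On Linux you can also read the limit the running JVM actually has, straight from /proc (the pgrep pattern here is just a guess at how your process is named):

```shell
# /proc/<pid>/limits reflects the live process, not what the config files say.
pid=$(pgrep -f org.elasticsearch | head -n 1)
grep 'Max open files' "/proc/${pid}/limits"
```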

Second, it is possible that you are actually reaching the 128k limit. How
many shards per node do you have? Do you have non-standard merge settings?
You can use the status API to find out how many open files you have. I do
not have a link since it might have changed since 0.19.
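If the status API output has moved around since 0.19, the proc filesystem gives a version-independent count (again assuming the process matches `org.elasticsearch`):

```shell
# Count the file descriptors the process currently holds open.
pid=$(pgrep -f org.elasticsearch | head -n 1)
ls "/proc/${pid}/fd" | wc -l
```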

Also, be aware that it is not possible to do rolling upgrades with nodes
that have different major versions of elasticsearch. The underlying data
will be fine and does not need to be upgraded, but the nodes will not be
able to communicate with each other.

Cheers,

Ivan

On Tue, Jan 21, 2014 at 7:42 AM, smonasco <smonasco@gmail.com> wrote:


Sorry to have taken so long to reply. I went ahead and followed your link;
I had been there before, but decided to give it a deeper look. As it
turned out, bigdesk showed me the max open files the process was actually
using, and from there I determined that my settings in limits.conf were
not being honored, even though switching to the context Elasticsearch was
running under showed the appropriate limits.

I then dug into the service script and found that someone had dropped a
ulimit statement into it, which was overriding the limits.conf setting.
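For anyone who hits the same thing: a ulimit call in a launch script silently wins over limits.conf, because pam_limits only applies at login and every child of the script inherits whatever the script set. A minimal reproduction:

```shell
# A stray "ulimit -n" in a wrapper clamps every process it spawns,
# regardless of limits.conf -- exactly what our service script was doing.
sh -c 'ulimit -n 512; ulimit -Sn'   # the inner shell now reports 512
```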

Thank you,
Shannon Monasco

On Wednesday, January 22, 2014 10:09:42 AM UTC-7, Ivan Brusic wrote:
