Oh, sorry, maybe I wasn't clear about which side the "Too many open
files" error appears on: it is happening on the Elasticsearch engine, not the
client; the client just hangs and never throws any kind of exception ...
Thanks,
Alin
On Oct 24, 3:27 pm, Gautam Shyamantak gau...@datarpm.com wrote:
Check whether you are closing the Client after you are done using it.
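A minimal sketch of that pattern, assuming the 0.x-era Java TransportClient API (the host, port and request code here are placeholders):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class ClientExample {
    public static void main(String[] args) {
        TransportClient client = new TransportClient();
        client.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            // ... issue index/search requests with the client here ...
        } finally {
            // closing the client releases its sockets and thread pools;
            // forgetting this leaks file descriptors on the client side
            // (though it would not by itself explain the server-side error)
            client.close();
        }
    }
}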
On Mon, Oct 24, 2011 at 5:39 PM, Alin Popa alin.p...@gmail.com wrote:
Hi guys,
While working with Elasticsearch, we've noticed an issue with the
Java client when the Elasticsearch engine crashes with "Too many open
files":
- I know, the installation instructions suggest that we need to increase
the default limit on open file descriptors, but let's say that I forgot to do that;
- after a while, in the console of the elasticsearch engine, I can see the
following stack trace, which is pretty obvious:
[2011-10-24 14:53:46,582][WARN ][netty.channel.socket.nio.NioServerSocketPipelineSink] Failed to accept a connection.
java.io.IOException: Too many open files
        at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:152)
        at org.elasticsearch.common.netty.channel.socket.nio.NioServerSocketPipelineSink$Boss.run(NioServerSocketPipelineSink.java:244)
        at org.elasticsearch.common.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
        at org.elasticsearch.common.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:44)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
- from this moment on, the elasticsearch server is down, and every time
I try to connect using the TransportClient (connecting to port 9300),
it hangs forever;
- I've tried setting the following options on the netty transport:
network.tcp.connect_timeout, connect_timeout and
transport.tcp.connect_timeout, each to the value "3s" (as I've seen here: https://github.com/elasticsearch/elasticsearch/blob/master/modules/el...),
but with no success; roughly what I tried is sketched below.
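This is only a sketch against the 0.x-era Java API (the host is a placeholder, and I'm not certain which of these timeout keys the transport layer actually honors):

import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class TimeoutExample {
    public static void main(String[] args) {
        // build the client with the timeout settings I have been experimenting with
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("network.tcp.connect_timeout", "3s")
                .put("connect_timeout", "3s")
                .put("transport.tcp.connect_timeout", "3s")
                .build();

        TransportClient client = new TransportClient(settings);
        // point the client at the transport port of the (now dead) node
        client.addTransportAddress(new InetSocketTransportAddress("localhost", 9300));

        // any request issued here hangs when the server is in the
        // "Too many open files" state, instead of failing after 3s
    }
}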
So, my questions are:
- Am I setting the right timeout flag on the client? If so, why isn't it
working as expected?
- Am I looking in the wrong place? Maybe I'm missing some other
setting that exists for exactly this purpose?

Thanks,
Alin