All,
I'm aware of the known issue with the file descriptor limit, so when I
first hit this problem I raised the limit. I kept getting the
exception, so I kept raising it. Here is what ulimit -a currently
returns:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 20
file size (blocks, -f) unlimited
pending signals (-i) 16382
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 100000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) unlimited
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I've even tried cranking it up to 300K, and I still get the following
error:
- Error injecting constructor, java.io.IOException: directory '/opt/elasticsearch-0.15.2/data/elasticsearch/nodes/0/indices/account_26/0/index' exists and is a directory, but cannot be listed: list() returned null
  at org.elasticsearch.index.store.fs.NioFsStore.<init>(NioFsStore.java:50)
  while locating org.elasticsearch.index.store.fs.NioFsStore
  at org.elasticsearch.index.store.StoreModule.configure(StoreModule.java:
  while locating org.elasticsearch.index.store.Store
    for parameter 3 at org.elasticsearch.index.shard.service.InternalIndexShard.<init>(InternalIndexShard.java:108)
  while locating org.elasticsearch.index.shard.service.InternalIndexShard
  at org.elasticsearch.index.shard.IndexShardModule.configure(IndexShardModule.java:39)
  while locating org.elasticsearch.index.shard.service.IndexShard
    for parameter 3 at org.elasticsearch.index.gateway.IndexShardGatewayService.<init>(IndexShardGatewayService.java:74)
  at org.elasticsearch.index.gateway.IndexShardGatewayModule.configure(IndexShardGatewayModule.java:40)
  while locating org.elasticsearch.index.gateway.IndexShardGatewayService
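If I'm reading that message right, the "list() returned null" part is just
java.io.File.list() failing silently, which it does whenever the directory
can't actually be opened (no free descriptors, permissions, and so on).
A trivial way to see the same behaviour, using the path copied from the
exception above:

import java.io.File;

public class ListCheck {
    public static void main(String[] args) {
        // Same path as in the exception; File.list() returns null
        // (rather than throwing) when the directory cannot be opened,
        // e.g. permission denied or no free file descriptors.
        File dir = new File("/opt/elasticsearch-0.15.2/data/elasticsearch/nodes/0/indices/account_26/0/index");
        String[] entries = dir.list();
        System.out.println(entries == null
                ? "list() returned null for " + dir
                : entries.length + " entries in " + dir);
    }
}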
Or sometimes I just get the 'too many open files' exception. Once this
happens, the cluster is dead: I have to stop the process, delete the data
directory, and restart it. When I try indexing again, I get the same
error at the same record count. This is only about 80K records, with a
small fraction of the number of fields I will eventually need, so it
seems like it should be fine. Also, lsof | wc -l shows a reasonable
number of open files (less than 10K), so the count itself looks fine.
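For what it's worth, I can also sanity-check the numbers from inside a
JVM started the same way as elasticsearch with something along these
lines (just a rough check; com.sun.management.UnixOperatingSystemMXBean
is Sun/Oracle-JDK specific):

import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCheck {
    public static void main(String[] args) {
        // The Sun/Oracle JDK exposes per-process descriptor counts via this
        // bean; it reports what this JVM is actually allowed to open, which
        // is what matters here rather than the shell's ulimit output.
        UnixOperatingSystemMXBean os =
                (UnixOperatingSystemMXBean) ManagementFactory.getOperatingSystemMXBean();
        System.out.println("open fds: " + os.getOpenFileDescriptorCount());
        System.out.println("max fds:  " + os.getMaxFileDescriptorCount());
    }
}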
What's even weirder is that when I run elasticsearch as a local,
in-process node (in the same JVM rather than starting it as a separate
process), I am able to index the same number of records without any
issues. I'm on Ubuntu; is there some kind of limit somewhere else that
I'm missing? I'm at a bit of a loss.
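For reference, the in-process setup I mean is roughly this (written from
memory against the 0.15-era Java API, so treat the exact calls as a
sketch rather than my exact code):

import static org.elasticsearch.node.NodeBuilder.nodeBuilder;
import org.elasticsearch.client.Client;
import org.elasticsearch.node.Node;

public class EmbeddedNode {
    public static void main(String[] args) {
        // Starts a node inside this JVM instead of talking to a separately
        // started elasticsearch process; indexing the same ~80K records
        // this way completes without errors.
        Node node = nodeBuilder().local(true).node();
        Client client = node.client();
        // ... index documents via client ...
        node.close();
    }
}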
Thanks in advance,
Lucas