Question about index optimize

This page
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/indices-optimize.html
talks about merging segments.

My curiosity about this process grew when, after an overnight
data import, the index went down with a "Too many open files" error.

Sure, I was able to find instructions for telling *nix to raise the
maximum open files limit to 32000. The question is this: how many
files does ES really keep open when importing monster loads?

Would it make sense to run an optimize after a monster load, before
doing further work? If so, how do you do that from a Java client?
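My guess, purely from reading the Java API and entirely untested, is
something along these lines ("vertices" happens to be my index name,
and localhost:9300 assumes the node runs on the same box):

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.transport.InetSocketTransportAddress;

public class OptimizeAfterLoad {
    public static void main(String[] args) {
        // Untested sketch: connect a TransportClient, then ask the index
        // to merge its segments down to one after the bulk load finishes.
        Client client = new TransportClient()
                .addTransportAddress(new InetSocketTransportAddress("localhost", 9300));
        try {
            client.admin().indices()
                    .prepareOptimize("vertices")   // substitute your own index name
                    .setMaxNumSegments(1)          // merge down to a single segment
                    .execute()
                    .actionGet();
        } finally {
            client.close();
        }
    }
}

Is that roughly the right call, or is there a better way?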

Thanks in advance.
Jack


After a rather amazing run, I got the "too many open files" report again.

The platform is a commodity PC with 8 GB RAM and a 1 TB hard disk,
running a slightly out-of-date Ubuntu.

I booted with

./elasticsearch -f -Xmx4g -Xms2g -Des.index.store.type=niofs -Des.max-open-files=true

after adding lines to /etc/security/limits.conf to raise the open-file
(nofile) limits to 32000.
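Roughly, those limits.conf entries were the following, reconstructed
from memory ("esuser" stands in for whatever account actually runs
Elasticsearch):

esuser    soft    nofile    32000
esuser    hard    nofile    32000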

I still got the crash.

I was running the same program on a different platform and, at about
the same point in the import, that one blew out with "No Node
Available"; the *nix console there showed "No route to host" (never
mind that it had been running fine on a local gigabit network with no
outside interference).

I am still interested in whether there is some background "too many
open files" issue going on.

Thanks in advance for ideas.

Cheers
Jack

Log trace below:
Exception in thread "Thread-16" Exception in thread "Thread-2098" org.elasticsearch.index.engine.IndexFailedEngineException: [vertices][4] Index failed for [core#42641.359576]
    at org.elasticsearch.index.engine.robin.RobinEngine.index(RobinEngine.java:497)
    at org.elasticsearch.index.shard.service.InternalIndexShard.index(InternalIndexShard.java:386)
    at org.elasticsearch.action.index.TransportIndexAction.shardOperationOnPrimary(TransportIndexAction.java:212)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction.performOnPrimary(TransportShardReplicationOperationAction.java:556)
    at org.elasticsearch.action.support.replication.TransportShardReplicationOperationAction$AsyncShardOperationAction$1.run(TransportShardReplicationOperationAction.java:426)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
Caused by: java.io.FileNotFoundException: /usr/local/lib/elasticsearch-0.90.9/data/elasticsearch/nodes/0/indices/vertices/4/index/_k7z.fdt (Too many open files)


Hey,

Can you make sure that your open files setting is actually applied, by
using the nodes info API? See

curl -XGET 'http://localhost:9200/_nodes?process'

and check the max_file_descriptors parameter.

You can also check http://localhost:9200/_cluster/stats?process for the
currently open file descriptors.

--Alex

