Max number of shards per node


(@mromagnoli) #1

I can't figure out how to find the default number of shards per node, or the
maximum, and I think my cluster has 'red' status because I have more than 100
shards in one index, and each index has on average 200K docs.

Thanks in advance!
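For reference, the cluster status and per-index shard settings can be inspected over the REST API. This is a sketch only; the node address `localhost:9200` and index names are assumptions, and it needs a running cluster:

```shell
# Overall cluster status (green/yellow/red) plus active/unassigned shard counts
curl -s 'http://localhost:9200/_cluster/health?pretty'

# Per-index settings, including number_of_shards
# (the default at the time was 5 shards with 1 replica)
curl -s 'http://localhost:9200/_settings?pretty'
```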

--
You received this message because you are subscribed to the Google Groups "elasticsearch" group.
To unsubscribe from this group and stop receiving emails from it, send an email to elasticsearch+unsubscribe@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


(Jordan Sissel-2) #2


Sorry you're having issues! To the best of my knowledge, there's no maximum
number of shards for a single server; if anything, it would be limited by
memory available.

Red means some shards can't be loaded, but this can be caused by corrupt or
missing shard data. Do you have any elasticsearch logs that might help here?
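To narrow down which index holds the problem shards, health can also be reported per index. A sketch, assuming a node on `localhost:9200`:

```shell
# level=indices breaks the health report down per index;
# red indices are the ones with unassigned primary shards
curl -s 'http://localhost:9200/_cluster/health?level=indices&pretty'
```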

-Jordan



(Jordan Sissel-2) #3


One additional note I forgot to mention - file count limits could be a
cause of your problems here.

Pretty much all Linux distros ship with a default 'open file' limit of 1024.
This is a really bad default for Elasticsearch, especially if you have many
shards. You may see file-count-related errors in ES's logs, such as "Failed
to create shard":

Failed to create shard, message [IndexShardCreationException[[example][1691] failed to create shard]; nested: IOException[directory '/mnt/btrfs/data/elasticsearch/nodes/0/indices/example/1691/index' exists and is a directory, but cannot be listed: list() returned null]; ]]

You can find more details about checking this file count limit here:
http://www.elasticsearch.org/tutorials/too-many-open-files/
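A quick way to see the limit in play: Elasticsearch inherits the open-file limit of whatever shell or init script starts it. A minimal check (the `elasticsearch` user name in the comment is an assumption):

```shell
# The open-file limit this shell would pass on to any process it starts;
# 1024 is the common (too low) distro default
ulimit -n

# To raise it persistently, add lines like these to /etc/security/limits.conf
# (assuming the process runs as the 'elasticsearch' user):
#   elasticsearch  soft  nofile  65535
#   elasticsearch  hard  nofile  65535
```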

-Jordan



(David Pilato) #4

In addition to Jordan's advice: you don't need to over-allocate the number of shards.
Have a look at:
http://www.elasticsearch.org/webinars/elasticsearch-pre-flight-checklist/
http://www.elasticsearch.org/videos/big-data-search-and-analytics/
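For example, the shard count can be set explicitly when an index is created, instead of relying on the default. A sketch only; the index name, counts, and node address are illustrative and assume a running cluster:

```shell
# Create an index with an explicit, modest shard count
# rather than over-allocating
curl -XPUT 'http://localhost:9200/myindex' -d '{
  "settings": {
    "number_of_shards": 2,
    "number_of_replicas": 1
  }
}'
```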

HTH

--
David :wink:
Twitter : @dadoonet / @elasticsearchfr / @scrutmydocs



(@mromagnoli) #5

Thanks for the reply, Jordan. We have already set the max open files for the
elasticsearch user to 65535, which we think is a good number. On the other
hand, we are seeing this in our log:

[netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
[151089] Failed to execute fetch phase
org.elasticsearch.search.SearchContextMissingException: No search context found for id [151089]
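Note that limits.conf only applies to new login sessions, so it is worth confirming the limit the running process actually has. A Linux-only sketch; it inspects the current shell's own limits as a stand-in, and the `pgrep` pattern for finding the ES PID is an assumption:

```shell
# Effective limits of any running process are visible in /proc/<pid>/limits.
# Here we check this shell itself; for Elasticsearch, substitute its PID,
# e.g. pid=$(pgrep -f elasticsearch) and then /proc/$pid/limits
grep 'Max open files' /proc/self/limits
```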



(@mromagnoli) #6

Thanks David, I'm going to look at those links; they look very interesting and useful.


