Changing max_number_of_shards_per_node


(Pablo Borges) #1

Hello there!

This shows up in the cluster state (Cluster Admin from the REST API) and it
doesn't seem to be a configurable option (at least I couldn't find it).

Is there a way to increase this value?

Cheers,
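For readers following along, here is a minimal sketch of the kind of tally being discussed: counting assigned shards per node from a cluster-state-like structure. The dictionary layout below is an assumption for illustration, not the exact JSON the cluster state API returns.

```python
from collections import Counter

def shards_per_node(routing_nodes):
    """Count assigned shard copies per node from a {node_id: [shards]} map."""
    return Counter({node: len(shards) for node, shards in routing_nodes.items()})

# Hypothetical cluster state: two nodes holding shards of a "logs" index.
state = {
    "node-1": [{"index": "logs", "shard": i} for i in range(3)],
    "node-2": [{"index": "logs", "shard": i} for i in range(2)],
}
print(shards_per_node(state))  # Counter({'node-1': 3, 'node-2': 2})
```

A per-node limit like max_number_of_shards_per_node would amount to rejecting allocations once one of these counts reaches the cap.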


(Shay Banon) #2

I have removed this option for now. There should be an option to restrict
what can run on each node, but it should be implemented more properly.

-shay.banon



(Pablo Borges) #3

So we're restricted to a maximum of 100 shards per node? That's not
good. :frowning:



(Shay Banon) #4

No, I mean there is no restriction. By the way, 100 shards on a node is
quite a lot; test your system to make sure it does not put too much load on it.
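To make the "100 shards on a node is quite a lot" remark concrete, here is a back-of-the-envelope sketch of how shard counts add up. The function name and even-spread assumption are illustrative, not an Elasticsearch API:

```python
def avg_shards_per_node(indices, primaries, replicas, nodes):
    """Average shard copies per node, assuming copies spread evenly.

    Each index contributes `primaries` primary shards, and each primary
    has `replicas` additional copies.
    """
    total_copies = indices * primaries * (1 + replicas)
    return total_copies / nodes

# 10 indices x 5 primaries x 1 replica = 100 shard copies on a single node.
print(avg_shards_per_node(10, 5, 1, 1))  # 100.0
```

Even a modest number of indices with default-looking settings can reach triple-digit shard counts per node, which is why load testing matters.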



(ppearcy) #5

I think I just hit something similar. I started a new single-machine
cluster instance, created indexes with 141 shards plus mappings, and loaded
a bunch of content. However, the cluster will not start back up until I add
an extra node.

I must admit I know I have too many shards for some of my various
data buckets, but it'd be nice if this restriction were applied at
index creation time.

Thanks
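Separate from the removed restriction, one factor worth noting in single-node setups: a replica shard cannot be allocated on the same node as its primary, so on one node every replica stays unassigned until another node joins. A hypothetical sketch of that arithmetic (function and parameter names are illustrative):

```python
def unassigned_replicas(indices, primaries, replicas, nodes):
    """Replica copies that cannot be placed with too few nodes.

    Each of the `replicas` copies of a primary needs a node distinct
    from the one holding the primary.
    """
    total_replica_copies = indices * primaries * replicas
    placeable_per_primary = min(replicas, nodes - 1)
    placeable = indices * primaries * placeable_per_primary
    return total_replica_copies - placeable

print(unassigned_replicas(1, 141, 1, 1))  # 141 replicas stuck on one node
print(unassigned_replicas(1, 141, 1, 2))  # 0 once a second node joins
```

This would explain why adding an extra node gets such a cluster back to a fully assigned state, independent of any hard shard cap.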



(Shay Banon) #6

This restriction is no longer there; it has been removed. By the way, those
kinds of restrictions can't be applied at index creation time: a cluster is
a live thing, and what holds at creation time might not be relevant later
on.

-shay.banon


