Rebalance primary shards

Hello:

I've got a setup where I have 3 nodes and each index has 3 shards + 1
replica. I'm really memory-constrained, so sorting is thrashing my
fielddata cache. I'm working on getting more memory, but in the meantime I
think the fielddata cache will fit if I only query against primary shards
(keeping replicas strictly for HA). I think I can accomplish this using
the _primary_first preference setting. What I'm struggling with is how to
have Elasticsearch balance the shards so that each node has 2 shards of an
index and the primary shards are not on the same machine. Usually they
are pretty balanced, but when I have to update some settings and restart
each node, one node inevitably ends up with 2 primaries on it.
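
For reference, this is the kind of search I'm planning to send (the index
name and sort field below are just placeholders for my real ones):

curl -XGET 'http://localhost:9200/myindex/_search?preference=_primary_first' -d '{
  "query": { "match_all": {} },
  "sort": [ { "timestamp": "desc" } ]
}'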

I've tried playing with the cluster.routing.allocation.balance.* settings
but haven't had any luck. Is there any other way to force this? I'm
generally assuming that each shard is roughly the same size...
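
In case it helps, this is the kind of thing I've been trying (the values
are just guesses on my part, nudged up from the defaults):

curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.balance.shard": 0.45,
    "cluster.routing.allocation.balance.index": 0.50,
    "cluster.routing.allocation.balance.primary": 0.40,
    "cluster.routing.allocation.balance.threshold": 1.0
  }
}'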

Any advice?

Thank you so much!
Andy O


Take a look at cluster.routing.allocation.awareness.attributes as well;
that should do what you want.
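
For example, something along these lines in elasticsearch.yml (the
attribute name and value here are just examples, and each node gets its
own value):

# on each node, with a different value per node
node.rack_id: rack_one

# tell the allocator to spread shard copies across that attribute
cluster.routing.allocation.awareness.attributes: rack_id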

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com


Hmm. I have looked at it a few times, but maybe I'm missing something.
How can I ensure primaries are balanced equally across the nodes within an
index?

Seems like allocation.awareness has more to do with ensuring shards get
allocated across groups of nodes (racks, zones, etc.). In my case, I only
have 3 nodes and they are all equal.

Seems allocation.balance does the following:

shard   -> ensures each node has a roughly equal number of shards across
           the cluster
index   -> ensures each node has a roughly equal number of shards within
           an index
primary -> ensures each node has a roughly equal number of primary shards
           across the cluster

What's missing (I think) is what I want (I think):

index.primary -> ensures each node has a roughly equal number of primary
                 shards within an index
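
In other words, something like this is what I'm wishing for (to be clear,
this setting doesn't exist as far as I can tell; it's purely hypothetical):

# hypothetical -- this setting does not actually exist
curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
  "transient": {
    "cluster.routing.allocation.balance.index.primary": 0.50
  }
}'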


Take a look at http://cache.preserve.io/mjjk82tv/index.html as it has an
example that might be of use.
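
Failing that, you can shuffle primaries by hand with the cluster reroute
API. If I remember right, cancelling the allocation of a primary copy
promotes the replica on another node, so something like this should swap
them (untested, and the index/shard/node values are placeholders):

curl -XPOST 'http://localhost:9200/_cluster/reroute' -d '{
  "commands": [
    {
      "cancel": {
        "index": "myindex",
        "shard": 0,
        "node": "node-1",
        "allow_primary": true
      }
    }
  ]
}'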

Regards,
Mark Walkom

Infrastructure Engineer
Campaign Monitor
email: markw@campaignmonitor.com
web: www.campaignmonitor.com
