I'm using a 4-node Elasticsearch cluster to ingest tweets from one of the
public Twitter streams. After upgrading my cluster from 1.3.2 to 1.4.2, I'm
noticing odd behavior in how shards are distributed across the nodes.
Whereas with 1.3.2 the 5 primary shards and the 1 set of replica shards were
evenly distributed across all 4 nodes, now I'm seeing this behavior:
- All the primary shards are allocated to one node
- The replicas are split between two other nodes
- The fourth node receives no shards at all
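For reference, the per-node shard distribution can be checked with the cat
APIs (a minimal sketch, assuming a node is reachable on localhost:9200):

    curl 'localhost:9200/_cat/shards?v'      # every shard, primary or replica, and the node holding it
    curl 'localhost:9200/_cat/allocation?v'  # shard count and disk usage per node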
I started out with the default values for these settings:
cluster.routing.allocation.balance.shard
cluster.routing.allocation.balance.index
cluster.routing.allocation.balance.primary
cluster.routing.allocation.balance.threshold
However, I've since set cluster.routing.allocation.balance.shard to 0.6f and
cluster.routing.allocation.balance.primary to 0.06f in the hope that these
settings would coerce Elasticsearch into distributing the shards more evenly.
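For context, here's a minimal sketch of how overrides like these can be
applied at runtime via the cluster update settings API (assuming a node on
localhost:9200; the same keys can also be set in elasticsearch.yml on each
node):

    curl -XPUT 'localhost:9200/_cluster/settings' -d '{
      "transient": {
        "cluster.routing.allocation.balance.shard": 0.6,
        "cluster.routing.allocation.balance.primary": 0.06
      }
    }'

Transient settings take effect immediately but are lost on a full cluster
restart; persistent (or elasticsearch.yml) settings survive it.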
Has anyone else seen this behavior? Is this a bug or a "feature"?
I haven't seen it, but 1.4.2 has only just come out.
Are you using Marvel at all? It might help shed some light on what is
happening.
Also just to confirm, did you restart all your nodes after the upgrade? Is
there anything in the logs on each node that might be of use?
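Two quick checks that might help narrow it down (a minimal sketch, assuming
a node on localhost:9200): confirm that all four nodes rejoined the cluster
on 1.4.2, and that the balance overrides are actually in effect
cluster-wide:

    curl 'localhost:9200/_nodes?pretty'             # each node reports its "version" - all four should say 1.4.2
    curl 'localhost:9200/_cluster/settings?pretty'  # persistent/transient overrides (elasticsearch.yml values are not listed here)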
Yes, I did reboot the nodes after the upgrade, and no, there is nothing in
the logs that would explain this.