Shard distribution problem

In one of our production systems we saw a strange situation. We have two nodes
in the cluster. There is a very big index with nearly 30M docs (46 GB on
disk). The index was initially configured with 2 shards and 1 replica.
However, after a while we disabled replicas and restarted the server. Then we
noticed a strange situation: both shards of this index are on the same node,
and the other node has no data for this index. When I run _cluster/state I
get the following section for this index:

events_201104: {
  shards: {
    0: [
      {
        state: "STARTED"
        primary: true
        node: "0n69lW5rROWGNM75bPqAPA"
        relocating_node: null
        shard: 0
        index: "events_201104"
      }
    ]
    1: [
      {
        state: "STARTED"
        primary: true
        node: "0n69lW5rROWGNM75bPqAPA"
        relocating_node: null
        shard: 1
        index: "events_201104"
      }
    ]
  }
}

Does anybody have an idea about this situation? Can we relocate one of these
shards to the other node?

Mustafa Sener
www.ifountain.com

Heya,

This can happen. The current balancing scheme balances based on the number of shards, aiming for an even number of shards per node. One way you can try to force one shard to move is to close the other index (assuming you are on 0.16).

There are plans for other balancing schemes, including one based on index size.
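The count-based scheme described above can be illustrated with a minimal sketch. This is a toy model, not Elasticsearch's actual allocator, and the node and index names are made up; it only shows why an index-blind, count-only allocator can leave both shards of one index on the same node:

```python
def assign_shards(shards, nodes):
    """Greedy count-based placement: each shard goes to whichever node
    currently holds the fewest shards (ties broken by node order).
    The allocator only counts shards; it ignores which index a shard
    belongs to and how big it is."""
    placement = {node: [] for node in nodes}
    for shard in shards:
        target = min(nodes, key=lambda n: len(placement[n]))
        placement[target].append(shard)
    return placement

# Two nodes, two indices with two primaries each. Depending on the
# order in which shards are allocated, both shards of one index can
# end up on the same node while the shard *count* stays even.
result = assign_shards(
    ["events-0", "other-0", "events-1", "other-1"],
    ["node1", "node2"],
)
print(result)
# {'node1': ['events-0', 'events-1'], 'node2': ['other-0', 'other-1']}
```

Each node ends up with two shards, so the balancer is satisfied, even though one node holds both shards of the big index.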

-shay.banon
On Monday, April 25, 2011 at 9:49 PM, Mustafa Sener wrote:


Hi,
I am using version 0.15.2. Can you explain what you mean by closing the other
index?

On Tue, Apr 26, 2011 at 8:44 PM, Shay Banon shay.banon@elasticsearch.com wrote:


--
Mustafa Sener
www.ifountain.com

Closing the other index will make it unavailable for search, but it will also trigger a rebalancing process (assuming you are on 0.16) that moves one shard from the first node to the other, to keep the number of shards even. Then you can open that index again. Check the Open / Close Index API. It's a hack, but it is the simplest and fastest way I can think of to get the other shard to move.
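As a sketch, the workaround boils down to two HTTP calls against the REST API. The host and index name below are hypothetical, and this only builds the requests rather than sending them, since actually issuing them needs a live 0.16 node:

```python
def close_open_requests(host, index):
    """Return the (method, url) pairs for the close/open workaround:
    closing the index triggers rebalancing, reopening restores it.
    Nothing is sent here; against a live cluster you would POST each URL
    in order, waiting for the shard to relocate before reopening."""
    return [
        ("POST", "http://%s/%s/_close" % (host, index)),
        ("POST", "http://%s/%s/_open" % (host, index)),
    ]

# Hypothetical "other" index on a local node:
reqs = close_open_requests("localhost:9200", "events_201103")
for method, url in reqs:
    print(method, url)
```

The close call frees that index's shard slots, the balancer then evens out the remaining shards, and the open call brings the index back for search.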
On Tuesday, April 26, 2011 at 9:18 PM, Mustafa Sener wrote:
