Active_primary_shards spontaneously doubled?


(Mike) #1

Hi, I have a four-node cluster configured with 8 primary shards and 1 replica. I'm using the Java client, but the equivalent yml file would look like this on each node:

cluster.name: clusterA
node.name: ca1
node.master: true
node.data: true
index.number_of_shards: 8
index.number_of_replicas: 1
path.data: /analyticsData
# lock the heap in memory
bootstrap.mlockall: true
# quorum of the 4 master-eligible nodes
discovery.zen.minimum_master_nodes: 3
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1[9300-9399]", "node2[9300-9399]", "node3[9300-9399]", "node4[9300-9399]"]

Of course node.name would differ per node.
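For context, on the Java client side the equivalent settings would be supplied to an embedded node roughly like this. This is just a sketch against the 1.x NodeBuilder/ImmutableSettings API (method names from memory), and the StartNode class name is only for illustration:

import org.elasticsearch.client.Client;
import org.elasticsearch.common.settings.ImmutableSettings;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.node.Node;
import org.elasticsearch.node.NodeBuilder;

public class StartNode {
    public static void main(String[] args) {
        // Mirror the yml above programmatically; node.name would differ per node.
        Settings settings = ImmutableSettings.settingsBuilder()
                .put("cluster.name", "clusterA")
                .put("node.name", "ca1")
                .put("node.master", true)
                .put("node.data", true)
                .put("index.number_of_shards", 8)
                .put("index.number_of_replicas", 1)
                .put("path.data", "/analyticsData")
                .put("bootstrap.mlockall", true)
                .put("discovery.zen.minimum_master_nodes", 3)
                .put("discovery.zen.ping.multicast.enabled", false)
                .putArray("discovery.zen.ping.unicast.hosts",
                        "node1[9300-9399]", "node2[9300-9399]",
                        "node3[9300-9399]", "node4[9300-9399]")
                .build();

        // Start an embedded node with those settings and grab a client from it.
        Node node = NodeBuilder.nodeBuilder().settings(settings).node();
        Client client = node.client();
        // ... use client ...
    }
}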

Once three nodes are up (satisfying the minimum_master_nodes setting), the cluster forms and all is well. The fourth node comes up, and all is still well; _cluster/health says:

{
    "cluster_name": "clusterA",
    "status": "green",
    "timed_out": false,
    "number_of_nodes": 4,
    "number_of_data_nodes": 4,
    "active_primary_shards": 8,
    "active_shards": 16,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 0
}

At some point node "ca4" is killed (kill -9, or some other possibly unfriendly way) and restarted. The result:

{
    "cluster_name": "clusterA",
    "status": "green",
    "timed_out": false,
    "number_of_nodes": 4,
    "number_of_data_nodes": 4,
    "active_primary_shards": 16,
    "active_shards": 32,
    "relocating_shards": 0,
    "initializing_shards": 0,
    "unassigned_shards": 0
}

????????

How is this possible? How do we spontaneously end up with double the primary shards? Where do I even start looking for clues? This is ES 1.1.1.
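In case it helps, the health numbers above are read from the Java client roughly like this. A sketch against the 1.x cluster admin API (getter names from memory, HealthCheck is just an illustrative wrapper):

import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.client.Client;

public final class HealthCheck {
    // Read the same fields that GET _cluster/health returns, via the Java admin API.
    static void printClusterHealth(Client client) {
        ClusterHealthResponse health = client.admin().cluster()
                .prepareHealth()
                .execute().actionGet();

        System.out.println("status:                " + health.getStatus());
        System.out.println("number_of_nodes:       " + health.getNumberOfNodes());
        System.out.println("number_of_data_nodes:  " + health.getNumberOfDataNodes());
        System.out.println("active_primary_shards: " + health.getActivePrimaryShards());
        System.out.println("active_shards:         " + health.getActiveShards());
        System.out.println("unassigned_shards:     " + health.getUnassignedShards());
    }
}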

Thanks.


(Mark Walkom) #2

Does the number of primaries drop back down, or does it stay the same?

I'd suggest upgrading to 1.5.2; 1.1.1 is getting a little old.


(Mike) #3

Well, my question turned out to be dumb. I was assisting another team and lacked complete information. It turns out they were starting one of the nodes against an existing path.data, which of course had two indices. Hence 8 shards * 2 indices = 16 primary shards. Sigh. Thanks.
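For anyone hitting the same thing: the cluster-level total hides this, so a quick way to spot it is the per-index breakdown in the same health response. A rough sketch against the 1.x Java admin API (getter names from memory, ShardCount is just an illustrative wrapper):

import java.util.Map;

import org.elasticsearch.action.admin.cluster.health.ClusterHealthResponse;
import org.elasticsearch.action.admin.cluster.health.ClusterIndexHealth;
import org.elasticsearch.client.Client;

public final class ShardCount {
    // Print active primaries per index; the cluster-level number is their sum.
    static void printPrimariesPerIndex(Client client) {
        ClusterHealthResponse health = client.admin().cluster()
                .prepareHealth()
                .execute().actionGet();

        int total = 0;
        for (Map.Entry<String, ClusterIndexHealth> e : health.getIndices().entrySet()) {
            int primaries = e.getValue().getActivePrimaryShards();
            total += primaries;
            System.out.println(e.getKey() + ": " + primaries + " primaries");
        }
        // With a single 8-shard index this sums to 8; a stray second index
        // picked up from an old path.data pushes it to 16.
        System.out.println("total active primaries: " + total);
    }
}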

