Split brain across 2 datacenters

Hello,

This is the first time I write here, although I've read many messages.
Please excuse my English.

Currently I have a two-node cluster, with one node in each datacenter.

There is a global load balancer that sends data connections randomly to
each node to share the load.

My question is: if a split brain occurs due to a connectivity failure
between the datacenters and the balancer keeps sending data to both
nodes, does that mean I will end up with two nodes holding different
data for the same index?

In that situation, when connectivity is restored, do I have to restart
one of the nodes to recover the cluster, losing the documents that were
indexed only on that node? In other words, is it not possible to merge
the diverging data of the two nodes?

Thanks for your help.

regards

Hiya

> Currently I have a two-node cluster, with one node in each datacenter.
>
> There is a global load balancer that sends data connections randomly to
> each node to share the load.
>
> My question is: if a split brain occurs due to a connectivity failure
> between the datacenters and the balancer keeps sending data to both
> nodes, does that mean I will end up with two nodes holding different
> data for the same index?

Yes.

> In that situation, when connectivity is restored, do I have to restart
> one of the nodes to recover the cluster, losing the documents that were
> indexed only on that node? In other words, is it not possible to merge
> the diverging data of the two nodes?

Correct.

In order to prevent a split brain, you need a minimum of 3 nodes. Then
you need to set minimum_master_nodes to 2. That means that a node has
to see at least 2 "master-eligible" nodes (including itself) in order to
form a cluster.

So if one node can't see the other two, it will not try to form a
cluster itself, but will keep trying to find more nodes. The other two
nodes will continue to function as a cluster.
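
For example, with the standard zen discovery you would put something
along these lines into elasticsearch.yml on each of the three nodes.
This is only a minimal sketch: the cluster name and host names are
placeholders, and you should adapt the discovery section to your own
environment.

cluster.name: my-cluster
node.master: true                        # all three nodes are master-eligible

# A node must see at least 2 master-eligible nodes (including itself)
# before it will elect a master and form or join a cluster.
discovery.zen.minimum_master_nodes: 2

# Explicit unicast discovery so the nodes can always find each other.
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node-dc1", "node-dc2", "node-3"]

Ideally the third node is a small master-only node (node.data: false)
outside the two datacenters, so it can act as a tie-breaker when the
link between them fails.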

clint

OK, I have already set minimum_master_nodes to 2. But in a split brain
caused by a network failure, as I described, the 3-node solution would
still lose events when connectivity comes back again, correct?

Regards

On Fri, 2013-03-22 at 09:00 -0700, zxferxferz@gmail.com wrote:

> OK, I have already set minimum_master_nodes to 2. But in a split brain
> caused by a network failure, as I described, the 3-node solution would
> still lose events when connectivity comes back again, correct?

No, because if a node cannot see 2 master nodes, then it won't form a
cluster and won't accept indexing requests.
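
To illustrate (the host name here is just a placeholder and the exact
error text varies between versions): an index request sent to the node
that has lost sight of the other masters is rejected with a
cluster-block error rather than being written locally, e.g.

curl -XPUT 'http://isolated-node:9200/myindex/event/1' -d '{"msg": "test"}'
# fails with something like:
# ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/2/no master];]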

clint

But does that mean that the 2 nodes stop accepting data?

Then I lose events anyway, right?

On Monday, March 25, 2013 1:51:21 PM UTC-5, Clinton Gormley wrote:

> No, because if a node cannot see 2 master nodes, then it won't form a
> cluster and won't accept indexing requests.

This doesn't work correctly (at least on 0.19.12). This morning, I had
one node (estwit16) fail three ping checks from the master (estwit25) on
a 36-node cluster with minimum_master_nodes set to 3. It seems to have
cycled through the possible masters, then elected itself and started
accepting data, using itself as master, while the other 35 nodes
continued with the original master. I don't understand what happened,
and would love some help explaining this. I ended up reloading the index
that was being updated, just to make sure that it wasn't corrupt.

[2013-03-26 09:14:50,992][DEBUG][discovery.zen.fd ] [estwit16]
[master] pinging a master
[estwit25][0p9_41dWRWGX67GoMji5FQ][inet[/192.168.201.96:9300]]{rack=rack314,
master=true} but we do not exists on it, act as if its master failure
[2013-03-26 09:14:51,110][DEBUG][discovery.zen.fd ] [estwit16]
[master] stopping fault detection against master
[[estwit25][0p9_41dWRWGX67GoMji5FQ][inet[/192.168.201.96:9300]]{rack=rack314,
master=true}], reason [master failure, do not exists on master, act as
master failure]
[2013-03-26 09:14:51,110][INFO ][discovery.zen ] [estwit16]
master_left
[[estwit25][0p9_41dWRWGX67GoMji5FQ][inet[/192.168.201.96:9300]]{rack=rack314,
master=true}], reason [do not exists on master, act as master failure]
[2013-03-26 09:14:51,115][DEBUG][discovery.zen.fd ] [estwit16]
[master] restarting fault detection against master
[[estwit11][1BIpE99UQaeHXWj6za5g4g][inet[/192.168.201.41:9300]]{rack=rack314,
master=true}], reason [possible elected master since master left (reason =
do not exists on master, act as master failure)]

(etc for many possible masters)

[2013-03-26 09:14:59,187][DEBUG][discovery.zen.fd ] [estwit16]
[master] pinging a master
[estwit27][CVq9Su8oRASyondpovrH8g][inet[/192.168.201.128:9300]]{rack=rack314,
master=true} that is no longer a master
[2013-03-26 09:14:59,187][DEBUG][discovery.zen.fd ] [estwit16]
[master] stopping fault detection against master
[[estwit27][CVq9Su8oRASyondpovrH8g][inet[/192.168.201.128:9300]]{rack=rack314,
master=true}], reason [master failure, no longer master]
[2013-03-26 09:14:59,187][INFO ][discovery.zen ] [estwit16]
master_left
[[estwit27][CVq9Su8oRASyondpovrH8g][inet[/192.168.201.128:9300]]{rack=rack314,
master=true}], reason [no longer master]
[2013-03-26 09:14:59,187][INFO ][cluster.service ] [estwit16]
master {new
[estwit16][DwWuXm7YRP-U4Hm-ncOeGg][inet[/192.168.201.84:9300]]{rack=rack314,
master=true}, previous
[estwit27][CVq9Su8oRASyondpovrH8g][inet[/192.168.201.128:9300]]{rack=rack314,
master=true}}, removed
{[estwit27][CVq9Su8oRASyondpovrH8g][inet[/192.168.201.128:9300]]{rack=rack314,
master=true},}, reason: zen-disco-master_failed
([estwit27][CVq9Su8oRASyondpovrH8g][inet[/192.168.201.128:9300]]{rack=rack314,
master=true})
[2013-03-26 09:14:59,677][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_10][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:14:59,740][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_12][9]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:14:59,798][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_38][6]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:14:59,859][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_16][3]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:14:59,925][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_12][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,056][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_14][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,114][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_25][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,171][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_26][5]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,238][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_43][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,301][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_19][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,482][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_18][5]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,547][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_11][7]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,661][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_28][8]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,722][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_31][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,805][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_37][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,877][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_43][5]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:00,944][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_04][5]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,006][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_10][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,065][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_28][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,123][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_29][6]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,250][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_23][8]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,306][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_45][7]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,364][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_48][6]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,425][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_07][7]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,484][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_12][6]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,541][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_24][5]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,599][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_20][8]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,659][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_38][8]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,721][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_39][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,781][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_49][1]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,841][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_47][7]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,904][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_08][2]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:01,966][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_18][6]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,029][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_19][7]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,091][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_16][9]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,321][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_15][0]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,387][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_28][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,454][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_33][3]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,521][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_09][5]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,587][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_04][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,651][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_08][4]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,717][DEBUG][gateway.local ] [estwit16]
[media_g2_2013_06][9]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:02,784][DEBUG][gateway.local ] [estwit16]
[media_g2_2012_18][6]: not allocating, number_of_allocated_shards_found
[0], required_number [1]
[2013-03-26 09:15:17,617][TRACE][indices.recovery ] [estwit16]
[media_g2_2013_15][6] starting recovery from
[estwit19][oknMlgc2QnW7Xnw_oQdfPg][inet[/192.168.201.87:9300]]{rack=rack314,
master=true}
[2013-03-26 09:15:17,697][TRACE][indices.recovery ] [estwit16]
[media_g2_2012_23][4] starting recovery from
[estwit13][vThn0kM_STCTmyadOXlNVA][inet[/192.168.201.43:9300]]{rack=rack314,
master=true}
[2013-03-26 09:15:17,708][TRACE][indices.recovery ] [estwit16]
[media_g2_2012_32][0] starting recovery from
[estwit30][ymU012csTFS_4-TTCV3esA][inet[/192.168.201.131:9300]]{rack=rack314,
master=true}
[2013-03-26 09:15:18,077][TRACE][indices.recovery ] [estwit16]
[media_g2_2012_35][8] starting recovery from
[estwit19][oknMlgc2QnW7Xnw_oQdfPg][inet[/192.168.201.87:9300]]{rack=rack314,
master=true}
[2013-03-26 09:15:18,140][TRACE][indices.recovery ] [estwit16]
[media_g2_2012_39][9] starting recovery from
[estwit32][MH4-i124Tpupd4WG7B76QQ][inet[/192.168.201.156:9300]]{rack=rack314,
master=true}
[2013-03-26 09:15:18,291][TRACE][indices.recovery ] [estwit16]
[media_g2_2012_09][4] starting recovery from
[estwit7][vnBm33StTTe9eAIlZ-VvQA][inet[/192.168.200.241:9300]]{rack=rack314,
master=true}
[2013-03-26 09:15:18,334][DEBUG][indices.recovery ] [estwit16]
[media_g2_2013_15][6] recovery completed from
[estwit19][oknMlgc2QnW7Xnw_oQdfPg][inet[/192.168.201.87:9300]]{rack=rack314,
master=true}, took[716ms]

I'm using 0.20.4 and it is working for me. But the question is the same:
when one node is not part of the cluster, because of configuration or a
failure, what happens? Does it stop indexing data? Is data lost?

I think this is a very important question in the case of two
datacenters, or a cluster with only a few nodes.

Regards.

Hi,

Does anybody know the answer?

Regards
