Index.routing.allocation.require realloc not working


(Alexandre Derumier) #1

Hi,
I'm trying to implement a hot-warm architecture like the one described in this blog:

I have a 3-node cluster; all nodes are master and data nodes,
with 1 replica and 6 shards.

2 nodes have box_type = hot

node:
  box_type: hot
  data: true
  master: true
  name: hotnode1 (and 2)

index:
  codec: best_compression
  number_of_replicas: 1
  number_of_shards: 6

1 node has box_type = cold

node:
  box_type: cold
  data: true
  name: coldnode1

index:
  codec: best_compression
  number_of_replicas: 1
  number_of_shards: 6

Setting require.box_type to hot (so all shards should move to hotnode1 and hotnode2):

curl -XPUT "http://elastic1:9200/palo-firewall-2016.01.21/_settings" -d'
{
  "index.routing.allocation.require.box_type": "hot"
}'

{"acknowledged":true}

Verifying the index settings:

curl -XGET 'http://elastic1:9200/palo-firewall-2016.01.21/_settings?pretty'
{
  "palo-firewall-2016.01.21" : {
    "settings" : {
      "index" : {
        "routing" : {
          "allocation" : {
            "require" : {
              "box_type" : "hot"
            }
          }
        },
        "creation_date" : "1453394476424",
        "number_of_shards" : "6",
        "number_of_replicas" : "1",
        "uuid" : "cNyk594NTW-0O9OkHMvF1g",
        "version" : {
          "created" : "2010199"
        }
      }
    }
  }
}

Shard list:
curl -XGET 'http://hotnode1:9200/_cat/shards'

palo-firewall-2016.01.21 2 p STARTED 162 264kb X.X.X.X coldnode1
palo-firewall-2016.01.21 3 r STARTED 165 354.1kb X.X.X.X hotnode1
palo-firewall-2016.01.21 3 p STARTED 165 494.9kb X.X.X.X coldnode1
palo-firewall-2016.01.21 1 r STARTED 181 307kb X.X.X.X hotnode1
palo-firewall-2016.01.21 1 p STARTED 181 321.3kb X.X.X.X coldnode1
palo-firewall-2016.01.21 5 p STARTED 156 166kb X.X.X.X hotnode2
palo-firewall-2016.01.21 5 r STARTED 156 166kb X.X.X.X coldnode1
palo-firewall-2016.01.21 4 r STARTED 162 245.5kb X.X.X.X hotnode2
palo-firewall-2016.01.21 4 p STARTED 162 245.5kb X.X.X.X coldnode1
palo-firewall-2016.01.21 0 r STARTED 166 185.4kb X.X.X.X hotnode2
palo-firewall-2016.01.21 0 p STARTED 166 205.7kb X.X.X.X coldnode1
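Side note: the _cat/shards output is easier to read with the `?v` flag (which prints column headers) and an index filter; this sketch assumes the same cluster and index as above:

```shell
# ?v adds the header row: index shard prirep state docs store ip node
curl -XGET 'http://hotnode1:9200/_cat/shards/palo-firewall-2016.01.21?v'
```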

How can I debug reallocation? I don't see anything in the logs.
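For anyone hitting the same question: on ES 2.x two ways to see why shards are not moving are a dry-run reroute with `?explain`, and raising the allocation decider log level at runtime. The commands below are a sketch against the hostnames used in this thread:

```shell
# Dry-run reroute: the "explanations" section of the response lists each
# allocation decider and why it allowed or blocked a move.
curl -XPOST 'http://elastic1:9200/_cluster/reroute?dry_run&explain&pretty'

# Alternatively, raise the allocation decider log level so the decisions
# show up in the node logs (transient setting, reset on cluster restart).
curl -XPUT 'http://elastic1:9200/_cluster/settings' -d'
{
  "transient": {
    "logger.cluster.routing.allocation.decider": "DEBUG"
  }
}'
```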


(Mark Walkom) #2

What does the rest of your ES config look like?

Also, you have too many shards; for the data size you have there, 1 primary is more than enough.


(Alexandre Derumier) #3

What does the rest of your ES config look like?

hotnode1

cluster:
  name: elastic
discovery:
  zen:
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - hotnode1.domain.com
          - hotnode2.domain.com
          - coldnode1.domain.com
index:
  codec: best_compression
  number_of_replicas: 1
  number_of_shards: 6
network:
  host: non_loopback:ipv4
node:
  box_type: hot
  data: true
  master: true
  name: hotnode1
path:
  data: /mnt/disk1

hotnode2


cluster:
  name: elastic
discovery:
  zen:
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - hotnode1.domain.com
          - hotnode2.domain.com
          - coldnode1.domain.com
index:
  codec: best_compression
  number_of_replicas: 1
  number_of_shards: 6
network:
  host: non_loopback:ipv4
node:
  box_type: hot
  data: true
  master: true
  name: hotnode2
path:
  data: /mnt/disk1

coldnode1

cluster:
  name: elastichosting
discovery:
  zen:
    ping:
      multicast:
        enabled: false
      unicast:
        hosts:
          - hotnode1.domain.com
          - hotnode2.domain.com
          - coldnode1.odiso.com
index:
  codec: best_compression
  number_of_replicas: 1
  number_of_shards: 6
network:
  host: 10.3.94.74
node:
  box_type: cold
  data: true
  master: true
  name: coldnode1
path:
  data: /archive/elasticsearch

Also, you have too many shards, for the data size you have there 1 primary is more than enough.

Well, this was with a demo index; I currently have 50GB of indices per day.
I'm using 6 shards because, in the future, I'll use more SSD disks (up to 6 disks per node, so shards will be spread across the disks for better throughput).


(Mark Walkom) #4

That's still too many; you will end up wasting heap, and SSDs are fast enough as it is.


(Christian Dahlqvist) #5

A Hot/Warm architecture does not really make sense for a 3-node cluster. As your Warm/Cold zone only has 1 node, there is nowhere to put the replica shards you have configured once your indices are relocated to the Warm/Cold zone.


(Alexandre Derumier) #6

The cold node uses a big RAID6 for archiving;
the 2 hot nodes don't have any RAID (JBOD SSDs).

I was planning to delete the replica once the index is moved to the cold node.
But even with that, the index.routing setting still isn't working.

Is there any way to get some debug logging?


(system) #7