Hello everyone,
I'm a real newbie with Elasticsearch.
I played a little with Logstash, Elasticsearch (with the head plugin) and
Kibana, using two replicated nodes and one index split into 5 shards across
the 2 nodes.
I'm testing rather roughly, on the 0.19.9 .deb package on Debian,
and I'm trying to verify that my replication is working correctly.
So I created my first node and indexed some data on it, and I can see
folders 0 to 4 created under
/var/lib/elasticsearch/logstashcluster/nodes/0/indices/stash.
Then I created my second node, which joined the cluster and picked up the
"stash" index. I can see it create and replicate my shards in
/var/lib/elasticsearch/logstashcluster/nodes/0/indices/stash as well.
If I brutally erase a shard, for example with "rm -rf
/var/lib/elasticsearch/logstashcluster/nodes/0/indices/stash/1", my
cluster status remains green, and that shard is never replicated again
until I restart the Elasticsearch service.
Is that normal? What can I do to check the replication status, or to force
it to "rebuild"?
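For context, the only check I have found so far is the cluster health API. Here is a sketch of what I run (assuming the default HTTP port 9200 on localhost; the "level" parameter may behave differently depending on the version):

```shell
# Overall cluster health: status, number of nodes, active/unassigned shards
curl -s 'http://localhost:9200/_cluster/health?pretty=true'

# More detail, broken down per index and per shard
curl -s 'http://localhost:9200/_cluster/health?level=shards&pretty=true'
```

Even after the "rm -rf" above, this still reports green for me, which is what puzzles me.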
My test may seem wild, but isn't that what would happen with a corrupted
hard drive or RAID volume in real life?
Thank you,
JCD.
--