Hello,
I'm new to Elasticsearch. I am going to build a production index with many
TB of data and billions of documents. For my first test I built a
cluster with 128 shards and 2 replicas on 6 machines, each with 16 GB RAM
and SSDs.
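For reference, I created the test index roughly like this (simplified,
without the mappings):

$ curl -XPUT 'http://localhost:9200/testindex/' -d '{
    "settings": {
        "number_of_shards": 128,
        "number_of_replicas": 2
    }
}'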
Recovery of a single node works fine, but it seems to get slower as the
number of shards grows. Can I increase the recovery speed?
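Would raising the recovery throttling settings help? I was thinking of
something like this (assuming these settings exist in my version; the
values are just guesses on my part):

$ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": {
        "indices.recovery.max_bytes_per_sec": "200mb",
        "cluster.routing.allocation.node_concurrent_recoveries": 4
    }
}'

Or is the recovery speed limited by something else?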
If I shut down the cluster with
$ curl -XPOST 'http://localhost:9200/_shutdown'
then restarting the cluster takes very long. The testindex has a size of
871.7gb (2539gb) and 6750481 (6750481) documents. After the restart the
shards stay in the recovering or unassigned state for a long time, and
the primary shards end up very unevenly distributed. Can I change the
distribution of the primary shards? Writes go to the primary shards, and
I have a very high write throughput on this index, so I think a more even
distribution of the primaries would improve the write performance.
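Would it help to disable shard allocation before the shutdown and
re-enable it once all nodes are back, so that shards are not moved around
during the restart? I was thinking of something like this (assuming my
version supports this setting):

$ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": {
        "cluster.routing.allocation.enable": "none"
    }
}'

and after all nodes have rejoined:

$ curl -XPUT 'http://localhost:9200/_cluster/settings' -d '{
    "transient": {
        "cluster.routing.allocation.enable": "all"
    }
}'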
Does anybody have experience with a maximum size for a shard or a maximum
number of shards on one machine? What is better: one data node with a lot
of memory per machine, or several data nodes with less memory each on the
same machine? My goals are high availability, scalability, and high read
and write throughput.
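To judge the shard and disk load per node I would check something like
the following (if the _cat API is available in my version), but I don't
know which numbers are still healthy:

$ curl -XGET 'http://localhost:9200/_cat/shards?v'
$ curl -XGET 'http://localhost:9200/_cat/allocation?v'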
Thanks,
Michael