How many shards should I put in a node?


(昕玫) #1

Hi,

I'm confused about how many shards I should put in a node (64 GB memory, 32 processors, and 500 GB disk). Is there a recommended range? Some projects also calculate the number of shards by capacity; in their testing, a 20 GB shard was most suitable. So if their total index size is 1.5 TB, there would be almost 77 shards in the cluster. Is that the right way to calculate?
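The capacity-based arithmetic described above can be sketched like this (the 1.5 TB total and the 20 GB target come from the post; the 20 GB figure is just that project's benchmark result, not a general rule):

```python
import math

# Capacity-based shard count estimate.
total_data_gb = 1.5 * 1024   # ~1.5 TB of index data, in GB
target_shard_gb = 20         # shard size that project found suitable in testing

shard_count = math.ceil(total_data_gb / target_shard_gb)
print(shard_count)  # -> 77
```

This only sizes shards by data volume; query load and node count also matter, as the replies below discuss.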

I also have a question about how to add shards during horizontal expansion. As our cluster only has one data type, is it a good idea to add shards by adding indices and using an index alias that points to all of them?
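For reference, the alias approach you describe would use the `_aliases` API, something like the sketch below (the index and alias names here are made up for illustration):

```
POST /_aliases
{
  "actions": [
    { "add": { "index": "logs-000001", "alias": "logs" } },
    { "add": { "index": "logs-000002", "alias": "logs" } }
  ]
}
```

Queries against `logs` would then fan out across both indices, so adding capacity becomes "create a new index, add it to the alias."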

Thanks!


(Mark Walkom) #2

We don't recommend having shards over 50GB as it just makes reallocation harder than it should be. You can probably go larger if you only have a single node, but don't forget that to change the number of shards for an index, you need to reindex that entire index.
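The reindex step mentioned above is done with the `_reindex` API: create a new index with the desired number of primary shards, then copy the documents across. A minimal sketch, with made-up index names:

```
PUT /my_index_v2
{
  "settings": { "number_of_shards": 6 }
}

POST /_reindex
{
  "source": { "index": "my_index_v1" },
  "dest":   { "index": "my_index_v2" }
}
```

For a large index this can take a long time, which is why getting the shard count roughly right up front matters.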


How many shards should I put in a physical node?(continue)
(昕玫) #3

Thanks for your answer!

There are also some viewpoints suggesting that we should keep the number of shards per node small.
Like this answer: https://groups.google.com/forum/?utm_medium=email&utm_source=footer#!topic/elasticsearch/hrq_z2qtNmo
and this experiment: http://blog.trifork.com/2014/01/07/elasticsearch-how-many-shards/
How can I strike a balance between "small shards are easy to reallocate" and "more shards use more resources"?


(Christian Dahlqvist) #4

Your query speed will vary depending on shard size, so you will need to do some benchmarking in order to find the appropriate balance between size and speed for your documents and querying requirements.


(昕玫) #5

Does that mean that as shard size grows, query speed gets slower and slower?
For example: one 100 GB shard on a physical node is slower than three 33 GB shards on the same node.


(Imran Siddique) #6

Yes, but don't keep just 1 shard per index. Have a reasonable number of shards per index.


(昕玫) #7

Thanks for all your suggestions, that's super helpful!

