If I run Elasticsearch in a VM under ESXi, should I dedicate half of the VM's memory to the Java heap, as on physical machines? Or can I use all of the memory for the heap?
Just like on bare metal: half the RAM at most.
And be careful not to over-allocate resources (CPU, memory) on the VMware host. If you over-allocate and your VMs actually use everything configured for them, the hypervisor will start swapping. Swapping is never desirable with Elasticsearch.
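As a concrete illustration of the half-RAM rule (assuming a VM with 32 GB of RAM; scale the numbers to your own VM size), the heap is capped in `jvm.options` with min and max set to the same value so the heap never resizes:

```
# jvm.options — give the JVM heap half the VM's RAM, min == max
-Xms16g
-Xmx16g
```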
You're absolutely right about RAM, but why can't I over-allocate CPU resources?
Because Elasticsearch creates thread pools with a size that's calculated relative to the number of CPU cores it detects on the machine (VM in your case).
So if you design your cluster with a certain load in mind (searching, indexing, etc.) and expect a response time of X ms, CPU over-allocation will affect that time. Even if you give your VM 8 cores, for example, and ES sizes its thread pools based on that value, response times will suffer whenever ESXi decides to take some of that computing power away from the ES VM.
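The thread-pool math above can be sketched as follows. The formulas are the ones the Elasticsearch thread pool documentation gives for the fixed `search` and `write` pools (exact formulas can vary between versions, so treat this as an approximation):

```java
// Sketch: how Elasticsearch derives fixed thread pool sizes from the
// number of CPU cores it detects. Formulas taken from the Elasticsearch
// thread pool docs; exact values may differ per version.
public class ThreadPoolSizing {

    // "search" pool: ((allocatedProcessors * 3) / 2) + 1
    static int searchPoolSize(int allocatedProcessors) {
        return ((allocatedProcessors * 3) / 2) + 1;
    }

    // "write" pool: one thread per allocated processor
    static int writePoolSize(int allocatedProcessors) {
        return allocatedProcessors;
    }

    public static void main(String[] args) {
        // On the node, ES detects cores the same way the JVM does:
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("detected cores: " + cores);

        // An 8-core VM gets a search pool of 13 and a write pool of 8;
        // if ESXi only actually delivers, say, 5 cores' worth of CPU,
        // those 13 search threads contend for less hardware than ES assumed.
        System.out.println("search pool (8 cores): " + searchPoolSize(8)); // 13
        System.out.println("write pool  (8 cores): " + writePoolSize(8));  // 8
    }
}
```

This is why the detected core count matters: the pools are sized once, from the cores the VM reports, not from the CPU time ESXi actually grants.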
I have 3 VMs with 8 cores each on a 24-core host, and right now the CPU load on the ESXi host is about 20%, with a load average near 2.5. I think that if I add 2 more VMs with 8 cores each, it will be even better, because Elasticsearch doesn't use all shards on all VMs at the same time. Isn't that so?
Why wouldn't it use all the shards?
When searching, for example, Elasticsearch uses a single copy of each shard (primary or replica). But what if you search across all your indices? Doesn't that mean you will most likely hit shards on all the nodes at the same time?
3 VMs with 8 cores each on a 24-core machine is OK. But the recommendation is not to over-allocate.
I think you're right. Thanks for your reply.