Virtualization Performance


(Cen Giz) #1

Dears,

We are running Elasticsearch on physical nodes with 24 cores/256 GB RAM - 1 Elasticsearch instance per machine - and CPU utilization is around 10%.

I have virtualized each node with a hypervisor, split the machine into 2 VMs, and deployed 1 Elasticsearch instance per VM - 12 cores/128 GB RAM. Each Elasticsearch instance runs with a 30 GB JVM heap, with storage directly attached to disk.

I was expecting to nearly double the capacity, but after doubling the Elasticsearch instances (2 VMs per physical node) I do not even get the same performance test results as with the physical nodes.

What could be the reason? Has anyone had the same experience, and what should I consider to make this work?

BR,
Cengiz


(Shane Connelly) #2

There are any number of possibilities, but a few things come to mind.

One is that you may be maxing out the attached disk(s).
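As a quick sketch of how you might check this (assuming a Linux host with the sysstat package installed; adjust the interval to taste), watching per-device utilization while the benchmark runs will show whether the shared disks are the limit:

```shell
# Report extended per-device statistics every 5 seconds.
# %util approaching 100%, or a climbing await column, suggests the
# two VMs are saturating the same underlying disks.
iostat -x 5
```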

Another is that you may have assumed that all of the memory Elasticsearch uses is in the heap, so that having 2x 30 GB heaps would give "2x the performance." However, Elasticsearch makes extensive use of a field property called doc values, which is essentially a columnar store that does not reside in the heap (so adding more processes to get extra heap doesn't help here). Modern operating systems keep these files in the filesystem cache, so this effectively becomes an in-memory (but off-heap) columnar store for much of the structured data held in Elasticsearch. In that way, adding more processes and more heap doesn't necessarily help.
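To make the doc values point concrete, here is a minimal mapping sketch (the index name `my-index` and field `status` are hypothetical). Doc values are enabled by default for keyword and numeric fields and live on disk, served through the filesystem cache rather than the JVM heap:

```shell
# Create a hypothetical index whose "status" field stores doc values.
# "doc_values": true is the default for keyword fields; it is shown
# explicitly here only to illustrate where this data lives (off-heap).
curl -X PUT "localhost:9200/my-index" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "status": { "type": "keyword", "doc_values": true }
    }
  }
}'
```

This is why leaving a large share of each machine's RAM outside the JVM heap matters: the OS uses it to cache these files.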

Of course, when talking about "performance," it's good to know what performance we're talking about. You can use Rally to benchmark the various performance characteristics and home in on what has improved, stayed the same, or gotten worse across the hardware/VM/software profiles.
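A hedged example of a Rally run against an existing cluster (assuming a recent esrally version and the bundled `geonames` track; swap in your own host and track):

```shell
# Benchmark an already-running cluster rather than letting Rally
# provision one, using the sample "geonames" track.
esrally race --track=geonames \
  --target-hosts=localhost:9200 \
  --pipeline=benchmark-only
```

Running the same track against the bare-metal profile and the 2-VM profile gives directly comparable throughput and latency numbers.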


(ddorian43) #3

Note that you should also mention the "inverted index," which gets cached by the Linux filesystem cache.

@OP: Find the bottleneck of your queries. Maybe you're not even stressing the server. Decide whether you want lower latency (more shards, up to a point) or more concurrency (fewer shards).
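One way to find where query time goes is the search profile API; a minimal sketch (the index name `my-index` and the `message` field are hypothetical):

```shell
# "profile": true makes Elasticsearch return a per-shard breakdown of
# where time was spent (query, collect, rewrite), which helps separate
# "queries are slow" from "the server isn't being stressed at all".
curl -X GET "localhost:9200/my-index/_search" -H 'Content-Type: application/json' -d'
{
  "profile": true,
  "query": { "match": { "message": "error" } }
}'
```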


(system) #4

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.