Need the vCPU requirement for GCP

Hi Team,

We are planning to move our Elasticsearch environment to Google Cloud Platform.
Our cluster has 3 master nodes + 5 data nodes,

with 423 indices and 2,100 shards.

How many vCPUs do we need to allocate for each node?

Please advise.

Thanks in advance.

That will depend entirely on the workload. How many CPU cores do the nodes in the current cluster have? Is it performing OK?

@Christian_Dahlqvist

Currently we are using bare-metal servers. May I know how to check the vCPU count?

How many cores does each server have? Is this sufficient for the expected/current workload? Do you ever max out CPU in the current cluster?

$ lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96

Is this the correct one?

Yes. The key here is to determine how large a portion of the available logical CPUs you actually use at peak. Do you have monitoring installed?

No, we have not installed any monitoring tools.

Is there any other way to check this?

I would recommend monitoring CPU usage on the nodes over a period of time that covers your peak load period(s). That way you will get an idea of how many logical CPU cores you actually use. Setting up Elasticsearch monitoring should give you an indication, but you can also use other tools and monitor this at the OS level.
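As a rough sketch of the OS-level approach (this assumes the sysstat package is installed on the servers, and the file path is just an example), you could record CPU usage over a peak period with sar:

$ sar -u -o /tmp/cpu-peak.sar 60 1440    # one sample per minute for 24 hours, saved to a file

and then read the recording back afterwards, looking at %idle (or 100 minus it) around your peak times:

$ sar -u -f /tmp/cpu-peak.sar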

Okay, thanks.

We will configure the monitoring tools.
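If we go with the built-in Stack Monitoring, my understanding (please correct me if this is wrong) is that collection can be enabled with something like the below. This assumes a self-managed cluster reachable on localhost:9200 without security, so the host and credentials would need adjusting for our setup:

$ curl -s -X PUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent": {"xpack.monitoring.collection.enabled": true}}'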

@Christian_Dahlqvist

Is os.cpu.usage the correct metric to monitor for CPU usage? Could you please confirm?

That seems like a good stat to look at. What is the maximum value over a longer period? Does this metric show the percentage of total CPU usage per host?
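If you want to cross-check it outside the monitoring UI, the nodes stats API also reports OS-level CPU per node. A quick sketch (assuming the cluster is reachable on localhost:9200 without security; adjust host and authentication to your setup):

$ curl -s 'localhost:9200/_nodes/stats/os?filter_path=nodes.*.name,nodes.*.os.cpu.percent&pretty'

The os.cpu.percent value there is the recent CPU usage of the whole host as a percentage (0-100).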

This metric shows the usage of each node in the cluster in percentage terms.

At maximum it reaches 7%.

7% of 96 logical CPUs is roughly 7 cores, so if you are not expecting an increase in load it would seem like each node may only need 8 vCPUs. I would start there, monitor the cluster, and increase a bit if you see CPU getting saturated or experience performance problems.
