System fields for CPU load

I have a Linux server with the following specs, and I get the CPU load from two different fields, shown further below:

# lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48


My concern is that I get different results from these two fields:

What is the best way to measure the load of the Linux server? The field 'system.load.1' matches what I see when running the top command.

Thank you.
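For reference (assuming a standard Linux /proc filesystem), the 1-minute load average that top displays is the first field of /proc/loadavg, and uptime reads the same value:

```shell
# /proc/loadavg: 1-, 5- and 15-minute load averages, then
# runnable/total task counts and the most recent PID
cat /proc/loadavg

# uptime prints the same three averages at the end of its line
uptime
```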

These System fields are described in the Metricbeat documentation:


The percentage of CPU time spent in kernel space.

type: scaled_float

format: percent



Load average for the last minute.

type: scaled_float

They really are two totally different metrics: the CPU percentage measures what share of time the CPUs spend in kernel space, while the load average reflects how many tasks are runnable (or in uninterruptible wait). I don't think either is better or more optimal than the other.
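To make the difference concrete, here is a rough sketch (assuming a standard Linux /proc filesystem) of how a kernel-space percentage like this is derived: it comes from deltas of the per-state tick counters in /proc/stat, whereas the load average is a count of tasks rather than a time share:

```shell
# Sample the aggregate "cpu" line of /proc/stat twice, one second apart.
# Fields after "cpu": user nice system idle iowait irq softirq steal ...
a=($(head -n1 /proc/stat)); sleep 1; b=($(head -n1 /proc/stat))

# Total ticks elapsed across all states during the interval
total=0
for i in $(seq 1 $((${#a[@]} - 1))); do
  total=$((total + b[i] - a[i]))
done

# Field 3 is the "system" column: ticks spent in kernel space
sys=$((b[3] - a[3]))
awk -v s="$sys" -v t="$total" 'BEGIN { printf "system %.1f%%\n", 100*s/t }'
```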

How can the first one be measured on the fly? Is there any command that shows it? For example, 'system.load.1' can be viewed with top.
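In case a live view is wanted, a few standard tools expose the kernel-space CPU percentage directly (vmstat ships with procps-ng on most distributions; mpstat comes from the sysstat package, so it may need installing):

```shell
# top's summary line: the "sy" value is the kernel-space percentage
top -bn1 | grep -i 'cpu(s)'

# vmstat: the "sy" column under "cpu" is system (kernel) time percent
vmstat 1 2

# mpstat: per-CPU %sys breakdown
mpstat -P ALL 1 1
```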