We have been trying to get a breakdown of active CPU% across all active processes. Here's an example: if we have 8 CPUs, the total shows as 800%, and an individual process's consumption can go above 100%, which is not ideal.
The post you linked to is fairly old; I believe we have made some changes since then. The system.process.cpu.total.norm.pct value ranges from 0 to 100%. You should be able to verify that by looking at the individual events in a non-aggregated view (the Discover tab) and running a Lucene query like system.process.cpu.total.norm.pct:>=1.
Thanks Andrew. It is Metricbeat 6.2.1, and the value is still going beyond 100%. Is there a way to apply a calculation (system.cpu.total / number of cores) to get accurate values? Should system.cpu.total.pct be used for this? And what does system.cpu.system.pct measure?
I would consider it a bug if the normalized CPU metrics are going over 100%. Can you please open a bug report on GitHub for this issue and include a raw event in JSON form (you can grab that from Kibana's Discover page)?
As a workaround, yes, it's definitely possible to do your own calculation. There is a system.cpu.cores metric that contains the total number of cores, and with that value you should be able to normalize the percentages yourself.
You can do the calculation either at ingest time using Logstash, or by adding a scripted field in Kibana.
This is a filter example in LS that shows how you can do a calculation. (I'm not sure these calculations are relevant -- I just copied it from somewhere to demonstrate how it can be done.)
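Something along these lines should work. This is a minimal sketch, assuming events from Metricbeat's system/cpu metricset, where system.cpu.total.pct and system.cpu.cores appear in the same event; the norm_calc target field name is just for illustration, not a real Metricbeat field:

    filter {
      # Normalize overall CPU usage by dividing by the core count.
      # Only attempt the division when both fields are present.
      if [system][cpu][total][pct] and [system][cpu][cores] {
        ruby {
          code => "
            cores = event.get('[system][cpu][cores]')
            pct   = event.get('[system][cpu][total][pct]')
            # norm_calc is an illustrative field name for the result
            event.set('[system][cpu][total][norm_calc][pct]', pct / cores) if cores > 0
          "
        }
      }
    }

With 800% total on an 8-core host, this would write 100% (i.e. 1.0) into the new field, which you could then plot in Kibana instead of the raw value.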
Thanks Andrew. I see that normalized CPU is defined only for process CPU. However, I wanted to understand whether there is a normalized CPU metric defined for the system/server as a whole?