High CPU usage in Elasticsearch node

In a development environment, I have Elasticsearch, Kibana, and Metricbeat all running on the same server.

CPU usage climbs so high that I can't remote onto the server. Only after I shut down the Kibana service can I regain control of it.

This was first observed after upgrading from 7.10.0 to 7.13.2.

The setup follows "Collecting Elasticsearch monitoring data with Metricbeat". I can use the Stack Monitoring UI to see the health of all the nodes in the cluster, including Kibana.

No errors are recorded in any of the logs (Elasticsearch, Kibana, and Metricbeat), apart from Kibana requests that failed due to insufficient resources, mostly resulting in "bad gateway" errors.

Workaround: CPU usage stabilized once Kibana monitoring was switched off, i.e. the kibana-xpack Metricbeat module disabled and monitoring.enabled: false in kibana.yml.
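
For anyone following along, this is roughly what the workaround looks like (a sketch, assuming the default config locations):

    # kibana.yml -- turn off Kibana's own monitoring collection
    monitoring.enabled: false

    # and on the Metricbeat side, disable the Kibana module:
    #   metricbeat modules disable kibana-xpack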

Seeking advice on troubleshooting this issue. I would really appreciate any help. Thanks!

Here is the breakdown of the development environment (on-premises):

Architecture: Cluster, 4 data nodes, 3 master nodes

Server having issues:

Data node with Kibana (this is the initial server used to prove out Elastic before setting it up as part of a cluster)

Server: Windows Server 2016 (virtual), 16 GB RAM, 80 GB (one of my Oliver Twist "Please, Sir, I want some more" moments)

Elastic version: 7.13.2 using the bundled OpenJDK 16.0, heap set to 8 GB in jvm.options
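
For reference, the heap is pinned in config/jvm.options roughly like this (a sketch; 8 GB is half of the 16 GB on the box):

    # config/jvm.options -- min and max heap set to the same value
    -Xms8g
    -Xmx8g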

***** Checklist *****

*** Elasticsearch ***

  • xpack.monitoring.collection.enabled: true

  • xpack.monitoring.elasticsearch.collection.enabled: false
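
In elasticsearch.yml, that amounts to (a sketch of the two settings above):

    # elasticsearch.yml
    xpack.monitoring.collection.enabled: true
    xpack.monitoring.elasticsearch.collection.enabled: false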

*** Metricbeat ***

On all Elasticsearch nodes

  • metricbeat modules enable elasticsearch-xpack

  • Configure elasticsearch-xpack.yml

  • Use built-in user: remote_monitoring_user

  • Configuration correct: Able to see status of individual nodes in Stack Monitoring

  • metricbeat modules disable system

  • metricbeat modules enable kibana-xpack

  • Use built-in user: remote_monitoring_user

  • Configuration correct: Able to see status of kibana in Stack Monitoring
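
For context, the two Metricbeat module configs look roughly like this (a sketch; hosts and credentials are placeholders, not my actual values):

    # modules.d/elasticsearch-xpack.yml
    - module: elasticsearch
      xpack.enabled: true
      period: 10s
      hosts: ["http://localhost:9200"]
      username: "remote_monitoring_user"
      password: "<password>"

    # modules.d/kibana-xpack.yml
    - module: kibana
      xpack.enabled: true
      period: 10s
      hosts: ["http://localhost:5601"]
      username: "remote_monitoring_user"
      password: "<password>"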

*** Kibana ***

monitoring.enabled: true

monitoring.kibana.collection.enabled: false

Hello,

Does this issue look like it applies to you?

cc @jbudz

Thanks
Bhavya

Thanks, Bhavya

The request "Update default memory limit" looks interesting.

The basis of this request is to ensure that Node.js does not use more memory than is available, and instead terminates when its memory is exhausted.

That is a very likely scenario for me, where CPU usage runs amok. Currently, I stop the Kibana service to free up the resources.

This is self-inflicted: with limited resources, the Kibana and Elasticsearch services are running on the same machine. The problem wasn't observed in production, where the services run on different machines.

To work around this, I am reducing the default memory limit in Kibana by setting --max-old-space-size in the node.options config file found inside the kibana/config folder. Reference: Use Kibana in a production environment
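
For example, something along these lines in kibana/config/node.options (the 1024 MB value is just an illustration; I will tune it to whatever fits alongside the 8 GB Elasticsearch heap):

    ## kibana/config/node.options
    ## max size of the Node.js old space, in megabytes
    --max-old-space-size=1024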

I am going to keep tabs on the machine for a few days to make sure this resolves the problem.

Thanks for the help!
