High system CPU usage with Elasticsearch on Kubernetes (using ECK) with ES v7.10.0+ (but not with v7.6.1)

Elasticsearch version: 7.10.0 & 7.14.2 (but not reproduced in ES v7.6.1)

Plugins installed: None

JVM version: Bundled in official Elasticsearch docker image

OS version: Reproduced on CentOS 7.9 with kernels 3.10.0 and 5.4.155, as well as on Ubuntu 20.04 with kernel 5.4.0

Kubernetes version: 1.19.x & v1.21.5

Description of the problem including expected versus actual behavior:

Running an Elasticsearch 7.10.0+ cluster on Kubernetes (reproduced on two different K8s distributions) using ECK 1.8.0.
While ingesting documents into Elasticsearch and running performance tests, we noticed high (and unusual) system CPU usage: between 20 and 30% while we are CPU bound, with user CPU around 60% and very little I/O wait.

We have reproduced this high system CPU usage with ES v7.10.0 as well as with ES v7.14.2, to check whether it was still present in newer releases.
The issue does not appear to be specific to the OS or kernel version (see above), nor to the Kubernetes version or the storage layer (CSI). We have used both statically provisioned local persistent volumes and OpenEBS, and in all cases the high system CPU usage was present with Elasticsearch v7.10.0+.

However, if we run Elasticsearch with the same topology on the same hardware but directly on Docker (i.e. without Kubernetes), the high system CPU usage does not appear, so it is not simply a containerized-ES issue.

Moreover, we did not notice this over a year ago, when we were using the Elasticsearch version current at the time, v7.6.1. We have just re-tested with that old ES version and, sure enough, we cannot reproduce the high system CPU usage under the exact same conditions (same data, same hardware, same ES topology, same K8s cluster, same ECK, changing only the ES version in the manifest). We therefore believe that something changed in Elasticsearch between v7.6.1 and v7.10.0 that causes this high system CPU usage when running on top of Kubernetes.

Steps to reproduce:

  1. Deploy an ES v7.10.0+ cluster on Kubernetes with ECK (see the manifest sketch after this list) and start ingesting enough documents to reach a CPU-bound state (high CPU usage on the Elasticsearch data nodes). We used ESRally with the eventdata track, and one shard per vCore allocated to the ES data pods. The number of ES data nodes does not matter; we have reproduced it on 2-node and 75-node clusters. Having dedicated master nodes or not does not matter either; we have tested both scenarios.
  2. Check the CPU usage breakdown with something like dstat -lrvn and focus on the system CPU value.
  3. You should see it go above 20%.
  4. If you run the same test with ES v7.6.1, the system CPU usage will be much lower.
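
For reference, here is a minimal sketch of the kind of ECK manifest we deploy in step 1. The cluster name, version, node count, storage class and volume size are illustrative; the node.store.allow_mmap line reflects our actual configuration, which (as it turns out later in the thread) matters.

```yaml
# Minimal ECK Elasticsearch manifest for step 1 -- values are illustrative.
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: perf-test                    # hypothetical cluster name
spec:
  version: 7.14.2                    # any 7.10.0+ version reproduces the issue for us
  nodeSets:
  - name: data
    count: 2                         # reproduced on 2-node up to 75-node clusters
    config:
      node.store.allow_mmap: false   # present in our deployment file (see the resolution below)
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: openebs-hostpath   # local PVs and OpenEBS both reproduce it
        resources:
          requests:
            storage: 100Gi           # assumed size
```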

This high system CPU usage (20-30%) prevents us from reaching the ingestion rate we achieved last year.

We have performed more tests on different Elasticsearch versions to narrow this down.
Our conclusion is that the high system CPU usage during data ingestion first appeared in Elasticsearch v7.9.0.


Welcome to our community! :smiley:

What sort of data are you ingesting?
How are you running these tests?

Heya @YannV - thanks for creating the discuss post (directed from this GitHub issue).

Along with Mark's request for the format of your data and how you are running your tests (i.e. your methodology and tooling), can you please share:

  • The manifests you're using to deploy your clusters
  • The output of the Hot Threads API, to see what ES is actually spending CPU time on during the benchmark. If you can get a comparison between 7.6 and 7.10, that would be even better.

Just to let you know, in case it can help someone else: we have found the root cause of the problem.
We were setting node.store.allow_mmap: false in the configuration of our Elasticsearch cluster deployment file, and this was the reason behind the high system CPU usage.

This was clear in the documentation:

For production workloads, it is strongly recommended to increase the kernel setting vm.max_map_count to 262144 and leave node.store.allow_mmap unset.
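
For anyone hitting the same thing, the change boils down to removing that setting and raising vm.max_map_count on the Kubernetes nodes instead. One way to do the latter, following the ECK documentation's virtual-memory guidance, is a privileged init container; a sketch, with the nodeSet name and count illustrative:

```yaml
# nodeSet fragment: node.store.allow_mmap is left unset (it defaults to true),
# and vm.max_map_count is raised on the host via a privileged init container.
nodeSets:
- name: data
  count: 2
  podTemplate:
    spec:
      initContainers:
      - name: sysctl
        securityContext:
          privileged: true
          runAsUser: 0
        command: ['sh', '-c', 'sysctl -w vm.max_map_count=262144']
```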

Thanks for letting us all know!

This was my primary suspicion: with node.store.allow_mmap: false, Elasticsearch falls back to using niofs. niofs uses the read() syscall to access the underlying segment files, which incurs context-switching overhead, and that is what you see reflected in the increased CPU time spent in the kernel (system).

Since 7.x, the default index store type is hybridfs, which chooses different strategies for reading Lucene files based on the read access pattern (random or sequential) in order to optimise performance: niofs, using the read() syscall, or mmapfs, using the mmap() syscall.

By setting node.store.allow_mmap: false, you're forcing Elasticsearch to open all files via niofs.
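
To make that concrete, here is a rough illustration in elasticsearch.yml terms (both settings are real, but you would normally leave them alone rather than set them explicitly):

```yaml
# Illustrative only -- not a recommended configuration.

# What the problematic deployment effectively did: memory-mapping disallowed,
# so every Lucene file is accessed with read() via niofs.
node.store.allow_mmap: false

# The 7.x default when mmap is allowed: hybridfs picks mmap() for files that are
# read randomly and plain read() for files that are read sequentially.
# index.store.type: hybridfs
```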

Hope this helps explain it a little more :slight_smile:
