Need help overcoming 100% CPU

(Andrey Seleznev) #1

Hi everybody, I'm stuck on cluster performance tuning/scaling and need help.

I'm implementing a solution on top of Elasticsearch. I've started load testing, and 5 parallel request threads bring my cluster down.
In short, I have this configuration:

1 node - all roles, 2x6 cores, 64 GB RAM.
bootstrap.mlockall: true
indices.fielddata.cache.size: 20%
network.tcp.blocking: true
Everything else is at defaults.

My main index is now 5+ million documents and about 80 GB.
For future expansion, it's laid out as 12 shards.

The basic query is quite heavy. It filters nothing but 2 types (there are only 2 of them for now) but runs several (5-7) aggregations over the whole set of documents. With 1 thread, query time is acceptable, about 350-700 ms. But in multithreaded test mode, CPU immediately shoots up to 100%.
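For context, the query is roughly of this shape (the index, field, and aggregation names below are made up for illustration; I can post the real query if needed):

```shell
# Query both types, skip hits, run several aggregations over all documents
curl -s -XPOST 'localhost:9200/myindex/bidutp,prgos/_search' -d '{
  "size": 0,
  "query": { "match_all": {} },
  "aggs": {
    "by_region":  { "terms":       { "field": "region" } },
    "avg_price":  { "avg":         { "field": "price" } },
    "uniq_users": { "cardinality": { "field": "user_id" } }
  }
}'
```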

In hot_threads I see:
100.1% (500.4ms out of 500ms) cpu usage by thread 'elasticsearch[node-4][search][T#9]'
94.7% (473.6ms out of 500ms) cpu usage by thread 'elasticsearch[node-4][search][T#15]'
92.8% (463.9ms out of 500ms) cpu usage by thread 'elasticsearch[node-4][search][T#25]'
(Can provide more details if needed)
And I even see EsRejectedExecutionException in the Elasticsearch log.
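The rejections show up in the cat thread pool API as well, which I use to watch the search queue under load:

```shell
# Active, queued, and rejected request counts for the search thread pool, per node
curl -s 'localhost:9200/_cat/thread_pool?v&h=host,search.active,search.queue,search.rejected'
```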

If I profile the query, I see that most of the time (and apparently most of the CPU cost) goes to the aggregations:
"took": 462,
...
"query": [ {
    "query_type": "ConstantScoreQuery",
    "lucene": "ConstantScore((ConstantScore(_type:bidutp) ConstantScore(_type:prgos))~1)",
    "time": "81.26062800ms",
    ...
} ],
"collector": [ {
    "name": "MultiCollector",
    "reason": "search_multi",
    "time": "348.6118540ms",
    ...
} ]
(Can post full if needed)
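(For anyone reproducing this: the excerpt above comes from the profile API, enabled per request by adding "profile": true to the search body. The index name below is a placeholder.)

```shell
# Ask Elasticsearch to break down where query and collection time goes
curl -s -XPOST 'localhost:9200/myindex/_search' -d '{
  "profile": true,
  "size": 0,
  "query": { "match_all": {} }
}'
```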

So now we come to the questions.
What am I doing wrong?
Is it a matter of shard count, or should I reduce the heap, or monitor GC?
Should I investigate something else?
I will certainly add some nodes to the cluster (2-3, I don't have tons of them in my pocket), but I need to understand whether that will be enough.

I'll be grateful for any advice.

(Andrey Seleznev) #2

Guys, please give me a hint on where to dig further.

(Andrey Seleznev) #3

I've added 2 nodes with 8 cores each. Now it handles 9 req/sec before hitting 100% CPU. It seems I need more tuning rather than HW expansion, but I don't know what to tune.

(Adrien Grand) #4

Can you share the output of the nodes hot threads API under load?
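That is:

```shell
# Samples the busiest threads on every node (CPU usage by default)
curl -s 'localhost:9200/_nodes/hot_threads'
```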

(Andrey Seleznev) #5

In 2 hours. On my way home now

(Andrey Seleznev) #6

Hello again! Gist of the hot threads output:

(Andrey Seleznev) #7

And the query:

(Andrey Seleznev) #8

I'm on 2.4.4, if it matters.

(Andrey Seleznev) #10

Now I have nginx + POST cache in front of Elasticsearch. It helps a bit against dumb F5s on the main page, but I still have trouble with the CPU cost of queries. Can someone help?

(Christian Dahlqvist) #11

What does disk I/O and iowait look like? What type of storage do you have?

(Andrey Seleznev) #12

I do not see any changes in the disk load. It is less than 5% regardless of my tests. I think my 30 GB/node filesystem cache prevents the load from reaching the disk.

(Andrey Seleznev) #13

CPU under test (not the heaviest one):

(Andrey Seleznev) #14

And disk utilization at the same time:

(Andrey Seleznev) #15

Could hashed fields be handy for my cardinality aggregation?
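From what I've read (not tried yet), on 2.x that would mean the mapper-murmur3 plugin: map a murmur3 sub-field so the hash is computed at index time instead of per query, then aggregate on it. Something like this, where the index, type, and field names are made up:

```shell
# Add a precomputed-hash sub-field (requires the mapper-murmur3 plugin)
curl -s -XPUT 'localhost:9200/myindex/_mapping/mytype' -d '{
  "properties": {
    "user_id": {
      "type": "string",
      "fields": { "hash": { "type": "murmur3" } }
    }
  }
}'

# Run the cardinality aggregation on the hashed sub-field
curl -s -XPOST 'localhost:9200/myindex/_search' -d '{
  "size": 0,
  "aggs": { "uniq_users": { "cardinality": { "field": "user_id.hash" } } }
}'
```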

(Andrey Seleznev) #16

In case anybody is interested, I'm still facing the problem. Any advice, please?

(Adrien Grand) #17

Hot threads suggest your node is just busy running aggregations; I can't think of a way to speed this up significantly. I think you would just need to add more processing power. One thing surprised me in the hot threads: you seem to be using the niofs directory. Did you opt into it explicitly? Switching to mmapfs might help by reading directly from the FS cache rather than copying memory from the FS cache into Java, but I don't expect it to bring significant speedups.

(Andrey Seleznev) #18

Adrien, thanks for your reply! After several days of research, I have come to think so too. I had only hoped that I was missing something in the configuration. I will discuss your remark about the FS with our OS administrators. Thanks again.

(Andrey Seleznev) #19

Can someone tell me about the niofs-to-mmapfs switch procedure? Is it dynamic, or will I need to close/open or rebuild my index?
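What I'd be attempting (untested, and I'd verify on a test cluster first; based on index.store.type being a static index setting that can't change while the index is open) is close, update, reopen:

```shell
# Close the index, change the store type, then reopen it
curl -s -XPOST 'localhost:9200/myindex/_close'
curl -s -XPUT  'localhost:9200/myindex/_settings' -d '{ "index.store.type": "mmapfs" }'
curl -s -XPOST 'localhost:9200/myindex/_open'
```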

(system) #20

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.