Topic says it all... my Linux box has entered a weird CPU state where Elasticsearch is taking up a lot of CPU at idle. The logs are clean, with no errors. Is there something I can look at to determine the cause? This is a home machine with very little traffic. Thank you.
There are a few threads on the same topic that would be worth checking.
But to start with: hot threads and GC. How are you monitoring it?
Hi Mark... you're a good man for always jumping into my lame threads. I'm using htop and gkrellm. A stop/start of the service has cured this for now. I only have data from June 11th onward, so I don't think it's a capacity issue. Again, nothing shows in the logs; the CPU just suddenly pegs out. This is on ES 1.4.5, and I'm going to upgrade soon. Thanks again, Mark.
Like I mentioned, hot threads is a good place to start as well.
Thanks Mark... after researching, now I know what hot threads are. When this happens again I will run:
curl "http://127.0.0.1:9200/_nodes/hot_threads?pretty"
and report the findings here. Thanks again, Mark... you always help me out.
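For next time, it can help to grab a few hot_threads samples in a row while the CPU is pegged and keep just the per-thread summary lines. A rough sketch, assuming the node is on localhost:9200 as above (the grep relies on the `x.y% (…) cpu usage by thread '…'` line format that hot_threads prints):

```shell
#!/bin/sh
# Rough sketch: grab a few hot_threads samples and list the busiest threads.
# Assumes the node is on localhost:9200, as elsewhere in this thread.
ES_URL="http://127.0.0.1:9200"
LOG="hot_threads.log"

# Take 3 samples a few seconds apart so a transient spike still shows up.
for i in 1 2 3; do
  curl -sf --max-time 5 "$ES_URL/_nodes/hot_threads" >> "$LOG"
  sleep 2
done

# hot_threads prints per-thread summary lines like:
#   0.7% (3.5ms out of 500ms) cpu usage by thread 'elasticsearch[...][scheduler][T#1]'
# Keep only those lines, highest CPU first.
grep 'cpu usage by thread' "$LOG" | sort -rn
```

Sorting the summary lines numerically (the percentage leads each line) puts the thread to investigate at the top.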
Here's what I got:
0.7% (3.5ms out of 500ms) cpu usage by thread 'elasticsearch[Ringleader][scheduler][T#1]'
10/10 snapshots sharing following 9 elements
0.0% (198.9micros out of 500ms) cpu usage by thread 'elasticsearch[Ringleader][transport_client_worker][T#2]{New I/O worker #2}'
10/10 snapshots sharing following 15 elements
0.0% (172.8micros out of 500ms) cpu usage by thread 'elasticsearch[Ringleader][transport_client_timer][T#1]{Hashed wheel timer #1}'
10/10 snapshots sharing following 5 elements
::: [logstash-gateway-28922-4324][fjA2elXlSc66pN9K2-LpTA][gateway][inet[/192.168.1.253:9301]]{data=false, client=true}
Hot threads at 2015-08-05T10:50:48.523Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=false:
0.1% (593.1micros out of 500ms) cpu usage by thread 'elasticsearch[logstash-gateway-28922-4324][[http_server_worker.default]][T#2]{New I/O worker #7}'
10/10 snapshots sharing following 15 elements
0.0% (211.6micros out of 500ms) cpu usage by thread 'Ruby-0-Thread-9: /opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.17/lib/stud/buffer.rb:92'
10/10 snapshots sharing following 31 elements
0.0% (180.4micros out of 500ms) cpu usage by thread 'elasticsearch[logstash-gateway-28922-4324][transport_client_timer][T#1]{Hashed wheel timer #1}'
10/10 snapshots sharing following 5 elements
@DigiAngel were those hot threads taken while the CPU usage was high? From that output it doesn't look like the CPU is busy at all.
Yeah, that's what I thought too... I've restarted the service. Next time it happens I'll dump out ps aux as well. Thank you.
Here's what top has to say:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
24197 elastic+ 20 0 917144 555264 5256 S 52.1 18.0 237:49.77 java
using 50% cpu
and from elasticsearch-paramedic:
Dazzler
ID: SUS_sfKsRcm7vNZ5ERFJxQ
IP: inet[/127.0.0.1:9200]
Host:
Load: 3.850
Size:
Docs: 5 031 059
Heap:
/
Thank you.
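When top shows the java process busy but hot_threads looks idle, one generic JVM trick (not ES-specific) is to find the hot native thread and match it against a thread dump. A sketch, using PID 24197 from the top output above and a hypothetical TID; it assumes a JDK with `jstack` on the box:

```shell
#!/bin/sh
# Rough sketch: figure out which Java thread is burning the CPU.
# PID 24197 is the java process from the top output above; TID is a
# hypothetical thread id you'd read off step 1.
PID=24197
TID=24321

# 1. Per-thread CPU view of the java process (the TID column is what we want):
if command -v top >/dev/null; then
  top -b -H -n 1 -p "$PID" | head -n 20
fi

# 2. jstack labels each thread with its native id in hex (nid=0x...),
#    so convert the decimal TID from top:
NID=$(printf '%x' "$TID")
echo "looking for nid=0x$NID"

# 3. Pull that thread's stack out of a thread dump (requires a JDK):
if command -v jstack >/dev/null; then
  jstack "$PID" | grep -A 20 "nid=0x$NID" || echo "thread not found"
fi
```

If the stack for the hot thread keeps landing in GC or merge code across several dumps, that points at the culprit even when hot_threads samples happen to miss it.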
52% isn't high.
What does hot threads say?
As soon as I removed all data besides the last 10 days, this immediately cleared up and I'm back to normal CPU usage. Hot threads had nothing to say... same as above, almost nothing, so... eh... I guess I'll upgrade to ES 1.7.1 and see what happens. Thanks Mark and all!
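(For anyone finding this later: with daily logstash-style indices that cleanup can be scripted. A rough sketch, assuming index names like logstash-2015.08.05, GNU date, and the node on localhost:9200; the Curator tool is the safer way to do this for real.)

```shell
#!/bin/sh
# Rough sketch: drop logstash daily indices older than the last 10 days.
# Assumes GNU date and index names like logstash-2015.08.05.
ES_URL="http://127.0.0.1:9200"
KEEP_DAYS=10

i=$KEEP_DAYS
while [ "$i" -le 40 ]; do   # look back another 30 days past the cutoff
  day=$(date -d "$i days ago" +%Y.%m.%d)
  # -f makes curl fail quietly on a 404 (index already gone)
  curl -sf -XDELETE "$ES_URL/logstash-$day" && echo "deleted logstash-$day"
  i=$((i + 1))
done
```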