High Java CPU usage with fresh elasticsearch install

Hi, I'm testing out / learning the ELK stack in a simple environment. I followed the guide here https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-ubuntu-14-04 and got a working setup out of it.

However, I had a single winlogbeat client pushing data to logstash and noticed that performance was poor - according to winlogbeat's own output, it was being rate limited by logstash.

I did a bit of digging and noticed that, with logstash stopped, elasticsearch still shows up in top consuming 50-100+% CPU (the same as when logstash is running). The test environment is an Ubuntu 14.04 VM on XenServer.

I'm new enough that I can't tell if this is a concern - it feels like it is (because it's getting rate limited while logging from a single Windows event log). The hot threads output, however, looks clear:

```
curl localhost:9200/_nodes/hot_threads
::: {Gorgon}{h9Yaa8yUToSMwIkYQnRCrw}{localhost}{127.0.0.1:9300}
Hot threads at 2017-12-21T11:05:46.322Z, interval=500ms, busiestThreads=3, ignoreIdleThreads=true:

0.0% (232.1micros out of 500ms) cpu usage by thread 'elasticsearch[Gorgon][transport_client_timer][T#1]{Hashed wheel timer #1}'
 10/10 snapshots sharing following 5 elements
   java.lang.Thread.sleep(Native Method)
   org.jboss.netty.util.HashedWheelTimer$Worker.waitForNextTick(HashedWheelTimer.java:445)
   org.jboss.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:364)
   org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
   java.lang.Thread.run(Thread.java:748)
```

So the elasticsearch hot threads seem to say everything is groovy (which it should be, it's not doing anything!), but top seems to say that elasticsearch is grinding. Am I doing something daft?

I think I have figured this out. Maybe this will help someone else!

I found that getting cluster health (`curl -XGET 'localhost:9200/_cluster/health?pretty'`) was showing thousands of active, tiny shards. This was caused by the tutorial's logstash output config, which looks like this:

`index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"`

What I think happened is that my single dev server started pumping every historical event log entry it had ever recorded through logstash and into elasticsearch. There wasn't much in each individual index (the biggest one was under 1MB), but there were many, many days of history.
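For anyone checking for the same symptom, listing the indices made the problem obvious. A quick way to do that (the `winlogbeat-*` prefix is just what the tutorial's config produces; adjust it for your own index names):

```
# list each winlogbeat index with its doc count and store size -
# in my case this showed a long tail of tiny per-day indices
curl -XGET 'localhost:9200/_cat/indices/winlogbeat-*?v'
```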

I seem to have resolved my issue by changing the index pattern to only use YYYY.MM (and deleting the old indexes). This has dropped elasticsearch's CPU usage considerably.
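For reference, this is roughly what the changed output section of my logstash config looks like (the `hosts` value is just my local setup, and the beat metadata fields come from the tutorial's config):

```
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # monthly indices instead of the tutorial's daily %{+YYYY.MM.dd} pattern
    index => "%{[@metadata][beat]}-%{+YYYY.MM}"
  }
}
```

Cleaning up the old per-day indices was just a wildcard delete - double-check the pattern against `_cat/indices` before running anything like this, since it is destructive:

```
# example only - removes the old daily indices (e.g. winlogbeat-2017.12.21)
# without touching the new monthly ones (e.g. winlogbeat-2017.12)
curl -XDELETE 'localhost:9200/winlogbeat-*.*.*'
```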

I see that a number of online tutorials also use day-level indexes. I'm now going to try to understand how to do that in a way that doesn't result in thousands of indexes, because from my first foray here that doesn't seem feasible.

It is hard to come up with a good default. Users who have lots of data typically need daily indices, which they usually drop after a short retention period (e.g. 2 months), so they only have ~60 active indices. But if you have less data that you want to keep for a long time, daily indices do indeed create too many indices.
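If it helps anyone reading later, that retention approach can be sketched with nothing more than cron and curl. This is only an illustration - the 60-day window, index prefix and host are placeholders, and dedicated tools like Curator handle this properly:

```sh
#!/bin/sh
# run once a day from cron: delete the daily winlogbeat index that has
# aged past a 60-day retention window (assumes GNU date and the
# default winlogbeat-YYYY.MM.dd index naming)
OLD_INDEX="winlogbeat-$(date -d '60 days ago' +%Y.%m.%d)"
curl -XDELETE "localhost:9200/${OLD_INDEX}"
```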

Absolutely - it's definitely my responsibility to configure for my environment! This has been a helpful learning experience; it's forced me to look into a few key concepts and start to get a bit of an understanding of them.

Hope this points someone else in the right direction if they have a similar basic issue.
