Hello all -
I have a single-node ES server at the moment, accepting messages from a
single logstash instance, which in turn fetches messages from a redis queue.
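For context, the pipeline is a fairly stock redis-to-elasticsearch setup. A minimal sketch of what it looks like (the hostname and redis key here are placeholders, not our exact config, and option names vary a bit between Logstash versions):

    input {
      redis {
        host      => "redis.example.org"  # placeholder broker host
        data_type => "list"               # shippers push events onto a redis list
        key       => "logstash"           # placeholder list key
      }
    }
    output {
      elasticsearch {
        host => "localhost"               # the single-node ES instance
      }
    }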
We recently had a networking glitch that caused messages to queue up for a
few days. I have logs flowing again now and LS/ES are slowly catching up.
However, during this time I've observed a peculiar CPU usage pattern caused by Elasticsearch. LS/ES have been ingesting messages as fast as possible during this "catch up" period, and this is what the CPU usage of our LS/ES node looked like:

[CPU usage graph: repeated large spikes during the catch-up period]
That increased CPU usage is due to ES.
When I first saw this increased CPU load, I expected to see corresponding iowait, but that is not the case.
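(For what it's worth, I've been checking that with something along these lines; the spikes show up as user/system time while %iowait stays low:)

    # CPU-only report every 5 seconds: %user/%system vs. %iowait
    iostat -c 5

    # per-thread CPU usage for the ES process ($ES_PID is a placeholder
    # for the Elasticsearch java PID)
    top -H -p $ES_PID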
What I've noticed is that near the peak of these spikes, message processing slows down to the point where it's actually slower than the rate at which our systems are generating logs (which is quite slow at this point, something in the neighborhood of 5 messages/second).
During times of very high message rates (such as when our system was
catching up with a few days' queued logs), is there some sort of internal
queueing mechanism within Elasticsearch that may cause this?
Has anyone else seen CPU usage patterns like this?
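If it would help with a diagnosis, I can pull numbers from the node stats APIs (default HTTP port assumed); the hot_threads output in particular should show what ES is burning CPU on at the peaks:

    # which threads are hot right now (e.g. segment-merge threads)
    curl 'localhost:9200/_nodes/hot_threads'

    # node stats: the thread_pool section shows index/bulk queue sizes and
    # rejections, and indices.merges shows segment-merge activity
    curl 'localhost:9200/_nodes/stats?pretty'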