Logstash monitoring and pipelines

Hi,

I have stopped Filebeat, so no logs are being pushed anymore.
This is confirmed by all the Logstash monitoring screens (like the Logstash events received rate, or the ES indexing rate): they are all at 0. BUT the "Logstash pipeline" screen shows the tree with numbers inside,
like 800 e/s,
and this number keeps moving/refreshing around that value forever.
See the snapshot here:
[screenshot]

But here:
[screenshot]
How is that possible?

thanks,
Rod

Hey,

If you set your time picker to the last 30 minutes, is it still showing around 900 events/second? Just wondering if it's because it's averaging out the last hour, in which case there were still a number of events received during that period.

The time picker is visible from the Overview tab, where I can see there are no events:

But it is not visible when I click on the "Pipelines" tab:

We are now 14 hours after I stopped Filebeat, and the pipeline still displays events/s.
This rate is moving, it's not fixed, as if there were activity, but all the other graphs from Logstash or ES show there is nothing.

Is that a bug?

thanks,
Rod

Hi Rod - what versions of Elasticsearch and Kibana are you using?

Just to verify, if you go to Monitoring > Logstash > Nodes > select your node > Advanced... is there anything in the queue?

On top of Docker 18.03.0-ce I use these versions:
FROM docker.elastic.co/logstash/logstash-x-pack:6.2.3
FROM docker.elastic.co/elasticsearch/elasticsearch-basic:6.2.3
FROM docker.elastic.co/kibana/kibana-x-pack:6.2.3

I had to restart my VM.
The queue is empty:

Every chart shows there are no events processed; even iostat shows there is no disk access.
The main pipeline view says 0 e/s:


But when I click on "main" I see events/s, and this number is moving:
[screenshot]

Then a few seconds later:
[screenshot]

Hi Rod - I think I know why you're seeing this, but I will need some time to repro your state on the same versions you're running. I'll get back to you as soon as I can!

Hi Rod, great that you're on 6.2!

The reason you're seeing these numbers even when your Filebeat is stopped and your queue is empty is due to how PipelineViewer queries its data. Next to the pipeline name there's a dropdown that allows you to select any LS pipeline version for which there is monitoring data.

As the viewer exists today, when a pipeline is selected, it displays the latest value over the lifespan of the pipeline. You may have noticed that there's no timespan picker present on that page; this is to indicate that we're not looking at the same slice of time as your other monitoring views. Metrics like e/s are an aggregate of the total events output by the plugin, divided by the time series interval. As more time passes, the numerator (the event count) stays the same, but the denominator (the size of the time series interval) grows linearly. That is why you see your numbers changing, and I'm guessing they are gradually decreasing.
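To make that concrete, here is a minimal sketch of the arithmetic (this is not the actual monitoring query, and the event count below is made up purely for illustration): a lifetime average is the total events emitted divided by the length of the window, so once events stop arriving the numerator freezes while the denominator keeps growing, and the displayed rate decays instead of dropping to zero.

# Illustrative only: a hypothetical total and a "lifetime average" rate
def lifetime_rate(total_events, window_seconds):
    return total_events / window_seconds

total_events = 2_880_000                        # made-up count of events sent before Filebeat stopped

print(lifetime_rate(total_events, 1 * 3600))    # 800.0 e/s right when Filebeat stops (1 h window)
print(lifetime_rate(total_events, 4 * 3600))    # 200.0 e/s three hours later, with no new events
print(lifetime_rate(total_events, 15 * 3600))   # ~53.3 e/s fourteen hours later, still non-zero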

To reproduce these conditions I set up a stack with MetricBeat a few days ago and let it run for about six hours. Then I killed MetricBeat and let the rest of my stack keep running and collecting monitoring data. If you check my screenshots below, you'll notice the same pattern you've seen. Additionally, I started my MetricBeat back up again and the e/s metrics began to rise over time.

MetricBeat running

MetricBeat stopped ~1 hour

MetricBeat stopped 30+ hours

MetricBeat running again for ~1 hour

If I can help clarify or answer any follow-up questions you may have, don't hesitate to ask.

Have a great day.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.