Logstash stats API is showing wrong events values

Hello everyone,

I'm running Logstash from a Docker image and querying the stats API to show users the number of logs processed by a pipeline. When I send 200 logs every 0.01 seconds, the event counters show wrong values. If I restart the pipeline and send a few logs every one or two seconds, the event counters show the right values.

Node information:

"host": "19b4437c6bd2",
	"version": "7.17.4",
	"http_address": "",
	"id": "90a19e53-5c93-4268-94e6-641e763e6017",
	"name": "19b4437c6bd2",
	"ephemeral_id": "b409fac0-0a62-4a8c-a5e0-225128fbfabd",
	"status": "green",
	"snapshot": false,
	"pipeline": {
		"workers": 8,
		"batch_size": 125,
		"batch_delay": 50

Events information of the pipeline:

"syslog": {
			"events": {
				"duration_in_millis": 41578,
				"out": 13935,
				"filtered": 144,
				"queue_push_duration_in_millis": 0,
				"in": 144

During my tests, I realized that "out" shows the correct count of logs indexed in Elasticsearch, but the "in" and "filtered" values do not. Does anyone know whether this is a bug, or whether something can be configured in Logstash to avoid this behavior?
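For reference, here is a minimal sketch of the check I'm doing, using the exact payload shape pasted above (the values are the ones from my pipeline). Under normal operation, "in" should be greater than or equal to "out", since an event has to enter the pipeline before it can be emitted, so I treat out > in as the inconsistent case:

```python
import json

# Sample payload shaped like the pipeline events section of the
# Logstash stats API response (values copied from the question).
stats = json.loads("""
{
  "syslog": {
    "events": {
      "duration_in_millis": 41578,
      "out": 13935,
      "filtered": 144,
      "queue_push_duration_in_millis": 0,
      "in": 144
    }
  }
}
""")

events = stats["syslog"]["events"]
# An event must enter the pipeline ("in") before it can leave ("out"),
# so in >= out should always hold once counters have settled.
consistent = events["in"] >= events["out"]
print(f"in={events['in']} filtered={events['filtered']} out={events['out']}")
print("counters consistent:", consistent)
```

With the values above this prints "counters consistent: False", which is the behavior I'm asking about.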

Thanks in advance
