How to find missing logs in Logstash?

I am running Logstash 6.3 with six file inputs, applying some basic filtering, and sending the output to Elasticsearch.

Logstash reports that it is processing 20-50 events per second.

Comparing my input logs against the documents indexed in Elasticsearch, I consistently find a large number of missing events (hundreds per hour).

There is nothing unusual about the missing events, and I can see no Logstash or Elasticsearch log warnings or errors relating to them.

No system resources are exhausted, and reducing the file inputs from six to one appears to reduce the number of missing events, but it does not eliminate the problem entirely.

I have enabled the dead letter queue and persistent queues, but again I can find no sign of the missing events in either location.
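
For what it is worth, this is the sort of throwaway pipeline I have been using to read the dead letter queue back out and confirm that it really is empty (only a rough sketch: the path assumes the default location under path.data, and "es" is my pipeline id):

	input {
			dead_letter_queue {
					path        => "/var/lib/logstash/dead_letter_queue"
					pipeline_id => "es"
			}
	}
	output {
			stdout { codec => rubydebug }
	}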

The system I was hoping to build with Logstash requires that no events whatsoever go missing.

So, is Logstash a reliable log/event delivery system? Events disappearing without a trace does not inspire confidence.

If it is reliable, then how exactly should users go about tracing missing events?
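
In case it helps frame an answer, one thing I have been considering is duplicating every event to a flat file alongside the elasticsearch output, so that the file can later be diffed against what actually reached the index (just an illustration; the output path and filename pattern are arbitrary):

	output {
			file {
					path  => "/var/log/logstash/trace/%{type}-%{+YYYY.MM.dd}.json"
					codec => json_lines
			}
	}

If events show up in that file but never in the index, that would at least point towards the elasticsearch output rather than the file inputs.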

Thanks for your help.

/etc/logstash/logstash.yml:
	path.data: /var/lib/logstash
	path.logs: /var/log/logstash
	xpack.monitoring.enabled: true
	xpack.monitoring.elasticsearch.url: [ "x.x.x.x:9200","x.x.x.x:9200" ]
	queue.max_bytes: 3gb
	dead_letter_queue.enable: true

/etc/logstash/pipelines.yml
	- pipeline.id: es
	  path.config: "/etc/logstash/conf.d/c_es.conf"
	  pipeline.workers: 4
	  queue.type: persisted
	- pipeline.id: ansible_es
	  path.config: "/etc/logstash/conf.d/ansible_es.conf"
	  pipeline.workers: 1

/etc/logstash/conf.d/c_es.conf
	input {
			...
			file {
					path            => "/c/DE_SBC/grnti_*"
					ignore_older    => 604800 # ignore files over 1 week old
					sincedb_path    => "/c/sincedb.c_es/DE_SBC_sincedb_es0"
					start_position  => "end"
					codec           => plain {
							charset => "ISO-8859-1"
					}
					type            => "sbc"
			}
	...
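
I have also been experimenting with the metrics filter as a way to get an in-pipeline event count and rate to compare against the Elasticsearch document count (again only a sketch; the meter name and tag are arbitrary):

	filter {
			metrics {
					meter   => "events"
					add_tag => "metric"
			}
	}
	output {
			if "metric" in [tags] {
					stdout {
							codec => line { format => "rate: %{[events][rate_1m]}" }
					}
			}
	}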
