close_removed not working properly?

Sure.

/etc/filebeat/filebeat.yml:

filebeat.spool_size: 2048
filebeat.publish_async: false
filebeat.idle_timeout: 5s
filebeat.registry_file: .filebeat
filebeat.config_dir: /etc/filebeat/conf.d
filebeat.shutdown_timeout: 0
name: production-app-001.domain.tld
fields_under_root: false
queue_size: 1000
max_procs: 
output.logstash:
  hosts:
    - logstash-receiver-001.domain.tld:5044
    - logstash-receiver-002.domain.tld:5044
  loadbalance: true

Then /etc/filebeat/conf.d/app.yml:
---
filebeat:
  prospectors:
    - input_type: log
      paths:
        - /var/log/Application/*/structured-*.json
      encoding: plain
      fields:
        '@application': Application
        '@environment': production
        '@group': blue
      fields_under_root: true
      tags:
        - app
      ignore_older: 4h
      document_type: app_logger
      scan_frequency: 10s
      harvester_buffer_size: 16384
      max_bytes: 10485760
      tail_files: false
      backoff: 1s
      max_backoff: 10s
      backoff_factor: 2
      close_inactive: 5m
      close_renamed: false
      close_removed: true
      close_eof: false
      clean_inactive: 4h1m
      clean_removed: true
      close_timeout: 0

Is there a way that I can tell whether the output was blocked at this time? I don't remember seeing anything about it in the filebeat log itself; otherwise I would hope that I would have included it.

I would guess that was the case to some extent. Our logstash instances that receive messages from filebeat had been down for almost 5 hours; once they were started again, roughly 600-800 servers and VMs all started sending their logs in again.

I did see events from this file. As mentioned originally, I definitely saw entries from the one specific log file about 12 hours late, which is roughly 7 hours after the logstash instances were started again. I'm not sure I can check whether any arrived between those two times, or how much of that specific file made it into the logging system, because the original file and the data in Elasticsearch have since been removed; we only keep some of the data for 14 days.