Logstash stops writing to output file

We have configured beats to collect access logs on different servers, and write those to logstash.
After a while, Logstash (latest version) stops writing to the output file: the process keeps running, but no new lines appear in the file. We have increased the heap without result.

Sample Logstash configuration for creating a simple Beats -> Logstash -> Elasticsearch pipeline:

input {
  beats {
    port => 5045
  }
}

filter {
  mutate {
    #add_field => { "gatewayhost" => "%{[host][name]}" }
    replace => { "message" => "%{[host][name]} %{[message]}" }
  }
}

output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    #index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }

  #print to stdout for debugging
  #stdout { codec => rubydebug }

  file {
    #path based on instancename, which is defined per instance in the filebeat.accesslogs.yml file
    #path => "/data/COLLECTLOGS/%{[host][name]}/%{[fields][instancename]}-%{+YYYY-MM-dd}.log"
    path => "/data/COLLECTLOGS/allinstances-%{+YYYY-MM-dd}.log"
    codec => line { format => "%{[message]}" }
  }
}

The corresponding Filebeat output section:

output.logstash:
  # The Logstash hosts
  hosts: ["10.82.230.50:5045"]
  bulk_max_size: 1000
  timeout: 10
  ttl: 5

We also played with the timeout, ttl, and bulk_max_size settings, but the end result is always the same.

The disk on the Logstash host is not full, and access rights on the target directory are fine.
When sniffing the network we see lots of retransmissions from Logstash to Filebeat, and we wonder whether that is normal. On the Filebeat side we continuously see:

ERROR pipeline/output.go:121 Failed to publish events: read tcp 10.82.191.37:49530->10.82.230.50:5045: i/o timeout

That sounds like the outputs are backing up, causing back-pressure on the pipeline, which eventually prevents the inputs from consuming more data. If you remove the elasticsearch output, does the file output continue to write data?
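To test that, you can trim the pipeline down to the file output alone. A minimal sketch based on your configuration above, with the elasticsearch block simply commented out:

    output {
      #elasticsearch {
      #  hosts => ["http://localhost:9200"]
      #}

      file {
        path => "/data/COLLECTLOGS/allinstances-%{+YYYY-MM-dd}.log"
        codec => line { format => "%{[message]}" }
      }
    }

If the file output keeps writing with this configuration, the back-pressure is coming from the elasticsearch output rather than from the file output itself.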

If it does, then does the elasticsearch log give any indication of why it is not indexing?
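If Elasticsearch does turn out to be the bottleneck, enabling Logstash's persistent queue can buffer events on disk between the beats input and the outputs, so that slow indexing does not immediately stall the input and trigger Filebeat timeouts. A sketch of the relevant logstash.yml settings (the size value is illustrative, not tuned for your workload):

    queue.type: persisted
    queue.max_bytes: 4gb

Note this only absorbs bursts and slow periods; if the elasticsearch output is permanently stuck, the queue will eventually fill and back-pressure will resume.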