Stalled pipeline breaks source file during log rotation


This system is running in a production environment, so I won't deliberately trigger the error, but if it happens again I'll try to get more info. Here is what I do have. I am running Logstash 1.5 on Red Hat 6 with a file input and a Redis output. The source file is written to by a Python program and rotated by logrotate. The Redis output has a congestion threshold; when it is met, the Python program eventually becomes unable to write to the source file. The timing of the errors appears to correspond with logrotate runs. I haven't found any logs yet except for Logstash's congestion warning.

Does anyone know what might be going on? I will update with more info if it happens again.


Logstash warning:
{:timestamp=>"2015-09-03T13:34:21.013000-0600", :message=>"Redis key size has hit a congestion threshold 20000 suspending output for 5 seconds", :level=>:warn}
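The congestion behavior in that warning comes from the Redis output plugin's `congestion_threshold` and `congestion_interval` settings. A minimal sketch of what the output block might look like, assuming values consistent with the warning above (the host and key name are assumptions, not from the post):

```
output {
  redis {
    host => "127.0.0.1"            # assumption: Redis host not given in the post
    data_type => "list"
    key => "logstash"              # assumption: key name not given in the post
    congestion_threshold => 20000  # matches the threshold in the warning above
    congestion_interval => 5       # matches "suspending output for 5 seconds"
  }
}
```

When the Redis list length reaches `congestion_threshold`, the output blocks for `congestion_interval` seconds, which backs pressure up through the pipeline to the file input.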

Logrotate parameters:
create 744 logstash logstash
rotate 1
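For context, a complete logrotate stanza of the kind implied by the parameters above might look like this. The path and schedule are assumptions, not from the original post:

```
/var/log/myapp/app.log {    # assumed path; not shown in the post
    daily                   # assumed schedule
    rotate 1
    create 744 logstash logstash
    # copytruncate is often used instead of create with tailing agents,
    # since it keeps the original inode that the file input is following
}
```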

Logstash file input:
path=> `
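The input above is truncated; for reference, a typical file input for a logrotate-managed file in Logstash 1.5 might look like this (the path and sincedb location are assumptions, not from the post):

```
input {
  file {
    path => "/var/log/myapp/app.log"                  # assumed path
    sincedb_path => "/var/lib/logstash/sincedb-app"   # assumed; records read offsets across restarts
    start_position => "beginning"                     # assumption; the default is "end"
  }
}
```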

(Magnus Bäck) #2

I don't see why the Python program would be unable to write to the file. Logstash doesn't impose any locks on its input files. What does "not be able to write to the source file" mean, exactly?


Since Logstash was broken I lost the logs, so I don't know what exceptions might have been thrown. The system is in production, so I can't force a failure, but I'll try to replicate the problem and get back to you.


I can't reproduce the issue on another machine, so I think it's a problem with that particular machine rather than with the file input.
