Filebeat seems to stop sending information to Logstash

I have Filebeat installed on three servers. It sends data to Logstash, which then forwards it to Elasticsearch.

It should create an index for each day, like the example below:

health status index                    pri rep docs.count docs.deleted store.size pri.store.size
yellow open   systemout_tmc-2016.02.17   5   1        146            0    350.6kb        350.6kb

However, that is not happening. I just checked the index list today and today's index has not been created.

Here is my Logstash output configuration:

output {
  if [type] == "systemout_tmc" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "systemout_tmc-%{+YYYY.MM.dd}"
    }
  }
}
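In case it helps, the Filebeat side is configured roughly like this (a trimmed sketch in Filebeat 1.x syntax; the Logstash host and port below are placeholders):

filebeat:
  prospectors:
    -
      paths:
        - /opt/IBM/WebSphere/Profiles/base/logs/tmc/SystemOut.log
      # document_type becomes the "type" field the Logstash conditional matches on
      document_type: systemout_tmc

output:
  logstash:
    hosts: ["logstash-host:5044"]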

If I restart Filebeat and Logstash, everything works fine.

Any idea what is going on?

Thanks in advance.

So after you restarted Logstash, are the indexes now created as expected? Or does it only create one for the day it was restarted?

After I restarted Logstash and Filebeat, the index was created as expected (one index per day).

I just did this right now and got the index:

yellow open systemout_tmc-2016.02.18 5 1 26 0 69.9kb 69.9kb

That means the issue is resolved?

Actually, no. Tomorrow's index will not be created, just like on the previous days, and I will need to restart Logstash and Filebeat again. I need this process to work automatically.

Start by looking in the logs of Logstash and Filebeat. I'm sure there's some clue in either log.

I got this log entry from Logstash:

{:timestamp=>"2016-02-18T19:11:26.443000+0100", :message=>"SIGTERM received. Shutting down the pipeline.", :level=>:warn}

And these from Filebeat:

2016-02-19T11:28:52+01:00 DBG Not harvesting, file didn't change: /opt/IBM/WebSphere/Profiles/base/logs/tmc/SystemOut.log
2016-02-19T11:28:57+01:00 DBG Flushing spooler because of timemout. Events flushed: 0

So, something appears to be sending Logstash a SIGTERM signal. Is that the problem then, that Logstash goes down every day?

I'm sorry, I sent the wrong line from the log.

Logstash works fine, but after a few hours I receive this message:

{:timestamp=>"2016-02-21T16:54:45.915000+0100", :message=>"Beats input: the pipeline is blocked, temporary refusing new connection.", :level=>:warn}

And then Logstash stops receiving the logs from Filebeat.

Which Filebeat version are you using?

This message is output by Logstash when the outputs/filters create too much back-pressure; Logstash then closes the connection to Filebeat.
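As a stopgap, you can give the pipeline more headroom before connections are refused by raising the beats input's congestion_threshold. A sketch, assuming a 2.x version of the logstash-input-beats plugin (where this option exists; the port is a placeholder):

input {
  beats {
    port => 5044
    # Seconds the pipeline may stay blocked before the input starts
    # refusing new connections (the plugin's default is 5).
    congestion_threshold => 30
  }
}

That only delays the symptom, though; the real question is why the outputs/filters are stalling in the first place.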

The filebeat version is 1.0.1 (amd64).

How can I prevent Logstash from closing the connections?

Logstash closing the connections is just a symptom of the actual problem, namely that Logstash's pipeline is stalled. IIRC there was a bug in either Filebeat or Logstash that caused such situations to happen unnecessarily. Dig through the release notes of both tools and see if it might apply to you.

I just updated Filebeat to 1.1.1 and I haven't gotten the error since. It seems to be working fine now.

I will watch closely until tomorrow.

Thank you!

The update solved the problem.

Thank you very much for the help!