Filebeat not reading new log lines when one of the logs is manually updated

When I manually update one of the logs in the logs directory, Filebeat does not read the new entries unless the Filebeat service is restarted. I'm sending the logs to Logstash once they are read.

Using Filebeat version filebeat-1.0.0-i686.

Exactly how did you update the log? Did you append to it with e.g. `echo test message >> logfile.log`?

I've tried editing it manually by copying many lines and appending them to the end of the log file.

I've also tried appending with `echo test message >> logfile.log`. The newly appended lines are still only read after restarting the Filebeat service.

I'd try enabling debug logs with `-d "*"` and see what comes up.
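A sketch of how that invocation might look for Filebeat 1.x, assuming the binary and config file paths from the posts above (`-e` logs to stderr instead of syslog, `-d "*"` enables all debug selectors):

```shell
# Run Filebeat in the foreground with full debug output
# (paths are placeholders; adjust to your installation)
./filebeat -e -d "*" -c filebeat.yml
```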

Dec 14 11:48:59 user-VirtualBox ./filebeat[6459]: prospector.go:341: Update existing file for harvesting: /log
Dec 14 11:48:59 user-VirtualBox ./filebeat[6459]: prospector.go:383: Not harvesting, file didn't change: /log
Dec 14 11:48:59 user-VirtualBox ./filebeat[6459]: prospector.go:219: Check file for harvesting: /log

The update events are not captured. Please find the logs above; I removed the path and name of the log files since they are customer specific.

Adding some more information from the logs:

Dec 14 13:09:36 user-VirtualBox ./filebeat[32553]: client.go:244: ES Ping(url=http://localhost:9200, timeout=1m30s)
Dec 14 13:09:36 user-VirtualBox ./filebeat[32553]: client.go:249: Ping request failed with: Head http://localhost:9200: dial tcp getsockopt: connection refused
Dec 14 13:09:36 user-VirtualBox ./filebeat[32553]: single.go:121: Connecting error publishing events (retrying): Head http://localhost:9200: dial tcp getsockopt: connection refused
Dec 14 13:09:36 user-VirtualBox ./filebeat[32553]: single.go:143: send fail
Dec 14 13:09:36 user-VirtualBox ./filebeat[32553]: single.go:150: backoff retry: 2s

It's not clear whether that error is related to your initial problem, but it's obviously something you have to address, e.g. by starting Elasticsearch or reconfiguring Filebeat to send events to the host where you are running ES.

Currently I've configured the Filebeat output to Logstash, and the Logstash output to Elasticsearch. When I restart Filebeat, it reads the updated logs without restarting Elasticsearch.

From the logs it's evident that you've configured Filebeat to send directly to Elasticsearch. If this isn't what you intended, please reconfigure Filebeat first before attempting to solve any remaining issues.

Please find my filebeat.yml configuration below:

Elasticsearch as output:

```yaml
# Array of hosts to connect to.
# Scheme and port can be left out and will be set to the default (http and 9200)
# In case you specify an additional path, the scheme is required: http://localhost:9200/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:9200
hosts: ["localhost:9200"]
```

Logstash as output:

```yaml
# The Logstash hosts
hosts: [""]

# Number of workers per Logstash host.
worker: 1
```

My intent was to configure Filebeat to output the logs to Logstash and from there to Elasticsearch. I think the line in the Elasticsearch output section was already uncommented in the downloaded yml file. Please correct me if I'm wrong.

`hosts: ["localhost:9200"]`

When I tried to comment out that line, it throws the error:

`Error Initialising publisher: no host configuration found`

Comment out both the `elasticsearch:` and the `hosts:` lines.

You have both outputs enabled, Logstash and Elasticsearch. Filebeat waits for an ACK from all outputs. Since no Elasticsearch instance is available, the internal buffers/queues eventually run full and Filebeat blocks until it can clear its buffers by sending the buffered content to both Elasticsearch and Logstash.

Please comment out the Elasticsearch output and check whether the issue is resolved.
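A minimal sketch of what the corrected output section of filebeat.yml could look like, assuming Logstash is listening on the default Beats port 5044 (the host names are placeholders for your environment):

```yaml
output:
  ### Elasticsearch output fully commented out so only Logstash is used
  #elasticsearch:
  #  hosts: ["localhost:9200"]

  ### Logstash as output
  logstash:
    # The Logstash hosts
    hosts: ["localhost:5044"]

    # Number of workers per Logstash host.
    worker: 1
```

With only one output enabled, Filebeat no longer blocks waiting for an ACK from an unreachable Elasticsearch instance.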


Steffen, magnusbaeck, thanks for your suggestions. The issue is now resolved and Filebeat picks up new log lines dynamically :slight_smile:

Hi, I am going through the same issue, my friend. I was first using logstash-forwarder to ship my logs to Logstash, but unfortunately logstash-forwarder was not tailing the file. It used to read the file from the beginning, which is not a good architecture, especially when my log file can get very large. Even after starting logstash-forwarder with `-tail=true` on the command line, it would only harvest the file but not process the events. That's why I switched to Filebeat.

Now Filebeat tails the file when `tail_files: true` is set in the configuration file, but it is not pushing the logs to Logstash dynamically, meaning I have to manually restart Filebeat each and every time to send the logs from Filebeat to Logstash. I am really pissed off with such behavior of Filebeat.

Further, the console output of logstash-forwarder was very friendly, as it used to tell how many events were getting processed, but when I run Filebeat using `service filebeat start` it does not let me know how many events it sent. It also tries only once to connect to Logstash, unlike logstash-forwarder, which continuously keeps trying to connect to the Logstash server. So please let me know about this ASAP. We need to be production ready pretty soon.
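For reference, in Filebeat 1.x `tail_files` is a per-prospector option. A minimal sketch, with a hypothetical log path as a placeholder:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/myapp/app.log   # placeholder; use your actual log path
      # Start reading new files at the end instead of from the beginning
      tail_files: true
```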

@abinay, please start a new thread for your unrelated problem.

@magnusbaeck Done. Can you please answer it there?