Filebeat needs to be restarted in order to send logs to Logstash dynamically

@ruflin So you mean I don't need to set tail_files = true in my filebeat.yml file? You mean that Filebeat will automatically send every line only once, even if tail_files = false?

Correct

@ruflin And what if I set tail_files = true?

https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html#_tail_files

@ruflin So if tail_files = false, Filebeat will read the file from the beginning but will send only the changes made (not the whole file), right? And with tail_files = true, Filebeat will not read the whole file but will read from the last offset of the file and then send the logs, right?
So overall, in both cases only the recent changes will be shipped by Filebeat, and not the whole file, right?
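For reference, this is roughly where the option sits in a Filebeat 1.x prospector (the path below is illustrative, not from this thread):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/app/*.log
      input_type: log
      # tail_files: false (the default) — on the very first run, read each
      # file from the beginning; on later runs, ship only lines past the
      # offset saved in the registry file.
      # tail_files: true — on the first run, start reading at the end of
      # the file, skipping everything already in it.
      tail_files: false
```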

Correct

@ruflin Thanks, ruflin. So why is the use of tail_files = true deprecated? I mean, what kind of loss can it incur? Please give me a sound explanation (with an example if possible). I have seen claims about this on the web, but they don't sound convincing: even if log rotation is happening, how can logs be lost if the previous logs are not being changed? Whatever is happening is happening with the logs that are being added, not with the previous logs, so how can we lose logs or data?

tail_files is not deprecated. Not sure where you saw that.

I can't really add more details than what you already found in other issues / discuss posts.

@ruflin So I can continue with tail_files = true without any fear of data loss, right?

It is clearly stated in the docs that there is a risk of data loss.

@ruflin @steffens Thanks, guys, for being cooperative. I have finally found the problem. It was in the way I was updating my file: I was doing it manually, just copying and pasting the contents. Because of this, no change was detectable, and as a result Filebeat could not see any changes in the file. So I wrote a bash script to append changes to the file instead, and it worked!
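A minimal sketch of the difference (the path `/tmp/app.log` is a stand-in for the watched log file, not from this thread): overwriting a file in place, as a paste from an editor may do, can leave the size unchanged, so an offset/size-based check sees nothing new, whereas appending always grows the file.

```shell
#!/bin/sh
# Hypothetical watched log file:
LOG=/tmp/app.log
echo "first line" > "$LOG"

# Appending grows the file, so a reader tracking the last offset
# will find new bytes past that offset on its next scan:
echo "second line" >> "$LOG"

wc -l < "$LOG"   # the file now has 2 lines
```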

Hi,
please see the real problem.
We have two sets of logs, /folderA/app.log and /folderB/app1.log, and Filebeat is pushing the logs from these folders. We want to push them continuously as new log messages arrive, but in sequential order, based on the timestamps of the log lines being written to the different files.

Say we have two files, and both are being updated within a fraction of a millisecond of each other. When the log lines are received at Logstash, we want them to arrive in that same sequence.
This is not happening currently with Filebeat.

Please start a new thread for your question :slight_smile:

Sorry, I will move this to a different thread; I have already opened a thread for this.

I got the same problem with Filebeat. The first time I started the Filebeat service, it shipped all the available log information from the logfile to Logstash (on the Elastic Stack server), so Filebeat ran as it should.

But when the application (which produces entries in the logfile) adds more lines to the logfile, Filebeat doesn't ship the new data from the file (while the Filebeat service is running). However, when I start Filebeat manually on the shipper with filebeat -c /etc/filebeat/filebeat.yml, Filebeat does ship the new entries.

My config looks like this (maybe I forgot an entry needed to make Filebeat pick up new values in the file?):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /applications/IBM/WebSphere/Profiles/AppSrv01/logs/server1/info.log
      input_type: log
      document_type: waslog
  registry_file: /var/lib/filebeat/registry
  config_dir: /etc/filebeat/conf.d

output:
  elasticsearch:
    enabled: false
    hosts: ["localhost:9200"]

  logstash:
    hosts: ["53.74.227.151:5044"]

    #tls:
    #  certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

shipper:

logging:
  files:
    rotateeverybytes: 10485760 # = 10MB
```

Maybe you can help me :slight_smile:

This is how the service looks:

```
patrick@was1:~> ps -ef | grep filebeat
root 3370 1 0 Jun09 ? 00:00:00 /usr/bin/filebeat-god -r / -n -p /var/run/filebeat.pid -- /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
root 3371 3370 0 Jun09 ? 00:11:32 /usr/bin/filebeat -c /etc/filebeat/filebeat.yml
```

Same here,

Having exactly the same problem as @pat7.
Running as a service, it needs a restart to pick up changes and then runs fine for a couple of scan cycles. Then it starts failing to detect changes in the logfile.

After a service restart it picks up the updates from before and works for a couple more cycles, then fails again.

Stopping the service and running Filebeat manually with sudo filebeat -c /etc/filebeat/filebeat.yml works fine and does not fail after 2-3 scan cycles.
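One way to compare the service run with the manual run is to start Filebeat in the foreground with debug logging enabled. This is a sketch using flags from the Filebeat 1.x CLI (`-e` logs to stderr instead of files, `-d` enables debug selectors); adjust the config path to your setup:

```shell
# Run Filebeat in the foreground with debug output for the components
# that detect and read file changes, to see whether updates are noticed:
sudo filebeat -c /etc/filebeat/filebeat.yml -e -d "prospector,harvester"
```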

P.S. I will create a separate topic for that as well.

Which Filebeat version are you using? We should also move this to a new thread, as the initial problem was resolved.

Hey!

I already did. Here's the link: Filebeat service looses track of files (restart required)

@abinay i have same problem with you, i need restart my filebeat to send logs to logstash. can you tell me to how fix that?
thankyou

@Me_Cloud As this problem was resolved, please open a new topic and share all your details there. Please also describe there which of the solutions here you tried.