Use Filebeat file output as Logstash input

I want to use the file output from Filebeat as a file input in Logstash, because there is no direct connection between the two systems. Here is my Logstash config:

input {
  file {
    path => "/home/martin/logs/filebeat*"
    type => "filebeat"
  }
}
output {
  if [type] == "filebeat" {
    elasticsearch {
      hosts => ["192.168.4.151:9200"]
      index => "filebeat"
    }
  }
}

Unfortunately it's not creating any index in Elasticsearch. All it creates is the sincedb files. There are also no errors in the log files.

If you are using a remote mount, make sure you understand the limitations.

Are you appending data to the files? The default value for start_position is end.
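
If the files arrive as complete copies rather than being appended to, you can tell the file input to read newly discovered files from the top. A minimal sketch, reusing the path from the config above (note that start_position only applies to files the sincedb has not recorded yet):

input {
  file {
    path => "/home/martin/logs/filebeat*"
    type => "filebeat"
    start_position => "beginning"  # read new files from the top instead of tailing from the end
  }
}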

Yep, that did the trick. I changed mode to read. Now my only remaining problem is that I have to transfer the files every 30 minutes or so, and I can only do this over SFTP. In Filebeat I set rotation to every 20 MB with a maximum of 40 files. The question now is what the best strategy on the Logstash side would be. Let's assume I only transfer the latest filebeat file. What happens when I get the file at 19 MB, and then in the next cycle, because Filebeat rotates it to filebeat.1, I get a filebeat file with new data?
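
For reference, this is roughly what the read-mode input looks like. The options are from the file input plugin, but file_completed_action => "log" and the log path are assumptions about how you might want finished files handled; the read-mode default is to delete them:

input {
  file {
    path => "/home/martin/logs/filebeat*"
    type => "filebeat"
    mode => "read"                 # consume each file once, from start to EOF
    file_completed_action => "log" # keep finished files and record them in a log instead of deleting them
    file_completed_log_path => "/home/martin/logs/completed.log"
  }
}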

I tried copying the Filebeat logs to the Logstash server each time. But it seems this ends in a conflict over the inodes of the files, and Logstash reads the logs in again every time. What would be the best solution when you have no direct connection between Filebeat (or whatever log store) and Logstash and you want to import rotating files?

No one?

Tracking what subset of files you have read when those files can get rotated is an extremely hard problem. The file input does its best to track this using the inode and file size. If the way you are transferring files results in the inode not being a reliable way to track things, then the file input cannot do the tracking.
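
If you can afford to have Logstash remove each transferred file once it has been fully read, one possible mitigation is to let read mode delete completed files and to expire stale sincedb entries, which limits how long a recycled inode can collide with old tracking state. This is a sketch under those assumptions, not a guaranteed fix:

input {
  file {
    path => "/home/martin/logs/filebeat*"
    type => "filebeat"
    mode => "read"
    sincedb_path => "/home/martin/logs/.sincedb-filebeat"  # keep the tracking state somewhere easy to inspect
    sincedb_clean_after => "2 weeks"                       # expire entries for files not seen within this window
    file_completed_action => "delete"                      # remove each file once read so it cannot be picked up again
  }
}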
