I am trying to ingest data from a log file into Elasticsearch via Logstash.
Here is the pipeline: LOG_FILE > FILEBEAT > LOGSTASH > ELASTIC.
Recently I have been observing that logs are missing in Elasticsearch. Upon checking, I realized the log ends up in the log file but never reaches Filebeat.
I tested this, but I didn't find any of the logs; I'm not sure why the Filebeat input is not pulling them.
Here are my filebeat inputs:
- type: log
hosts: ["XX1:5044", "XX6:5044"]
Is there any reason to use close_renamed in your case? If the files Filebeat is reading are rotated by renaming them, they will be closed before being completely read; take a look at the docs for this option.
Same thing for tail_files: this option makes Filebeat start from the end of a file when opening it.
In combination, both options may make Filebeat stop reading files too soon and ignore the first lines of new files.
I would suggest trying without these options, unless there is some strong reason to use them.
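For reference, a minimal log input without those two options might look like this (the paths here are illustrative placeholders, not from your config):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/app/*.log
  # no close_renamed: the harvester keeps a renamed file open until it is fully read
  # no tail_files: new files are read from the beginning, not from the end
```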
Other things to consider:
- Is there any reason to use Logstash in your deployment? You can send events directly from Filebeat to Elasticsearch, which would simplify your deployment.
- Consider using the filestream input, which is intended to replace the log input and solves some issues it had.
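As a sketch, a minimal filestream input could look like this (the id and path are illustrative placeholders):

```yaml
filebeat.inputs:
- type: filestream
  id: app-logs            # each filestream input should have a unique id
  enabled: true
  paths:
    - /var/log/app/*.log
```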
Thanks for getting back to me.
close_renamed: I use this because the file gets renamed and moved away while a new file is created with the same name configured in filebeat.yml. I have seen a few cases where Filebeat keeps watching the old file, so I use this option.
tail_files: I did try removing this option but ended up with the same result.
I am deploying the filestream input right now and will keep you posted on any changes.
- type: filestream
That didn't help me.
It's the same. I tried reading the logs with the console (stdout) output on Filebeat and nothing showed up.
Can you please help me debug this?
Is the problem that no logs at all are collected, or that only some log lines are lost?
Only some logs are not collected.
Are you sure that the rotation happens by moving the file and creating a new one, or is it copying and truncating?
Please take a look at these troubleshooting docs in case they give you some ideas: Log rotation results in lost or duplicate events | Filebeat Reference [8.4] | Elastic
The attached link is not helping me; here is my observation.
I am seeing logs from the account during off hours, but when I try during peak hours I see no logs.
This suggests that Filebeat is not able to read all the logs. Any input on boosting Filebeat performance?
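If the bottleneck is throughput rather than rotation, the usual knobs are Filebeat's internal memory queue and the Logstash output batching. A sketch, with illustrative values that would need tuning for your actual load (the hosts are taken from the config posted above):

```yaml
# filebeat.yml - throughput-related settings (values are examples, not recommendations)
queue.mem:
  events: 8192             # how many events the internal queue can buffer
  flush.min_events: 2048   # minimum batch size forwarded to the output
  flush.timeout: 1s        # flush a partial batch after this long

output.logstash:
  hosts: ["XX1:5044", "XX6:5044"]
  loadbalance: true        # spread batches across both Logstash hosts
  worker: 2                # parallel connections per host
  bulk_max_size: 4096      # max events per Logstash batch
```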
So, is log rotation happening by copying and truncating the files, or by moving the file and creating a new one?
Have you considered the idea of removing Logstash from the equation? Is there any reason you need it in your deployment?
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.