Not all files are picked up at the same time

Hi there,

I am new to Filebeat. I am setting up a Logstash server to monitor log files from different processes. Each process runs at a fixed time and its output is stored in a log file.

This is my config:
```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
      input_type: log
      document_type: syslog
    -
      paths:
        - /home/deltion/kinaxia/stage/_logs_22609-6240/out.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage
    -
      paths:
        - /home/deltion/kinaxia/stage/_logs_22548-6238/out.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage
    -
      paths:
        - /home/deltion/kinaxia/stage/_logs_22315-6237/out.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage
    -
      paths:
        - /home/deltion/kinaxia/stage/_logs_22308-6239/out.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage
    -
      paths:
        - /home/deltion/kinaxia/stage/_logs_22290-6236/out.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage
    -
      paths:
        - /home/deltion/kinaxia/stage/_logs_22289-6235/out.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage
    -
      paths:
        - /home/deltion/kinaxia/stage/*/error.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      backoff: 600s
      backoff_factor: 1
      document_type: kinaxia_stage_error
    -
      paths:
        - /home/deltion/CNOAPI/stage/*.log
      multiline.pattern: '^([0-9]{2}:[0-9]{2}:[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      document_type: automata_stage_log
    -
      paths:
        - /home/deltion/CNOAPI/test/*.log
      multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
      multiline.negate: true
      multiline.match: after
      input_type: log
      document_type: automata_test_log

  registry_file: /var/lib/filebeat/registry

output:
  logstash:
    hosts: ["127.0.0.1:5044"]
    bulk_max_size: 1024
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
  file:
    enabled: true
    path: /home/deltion/filebeat_log
    filename: filebeat
    rotate_every_kb: 10485760
    number_of_files: 10

shipper:

logging:
  to_files: true
  to_syslog: false
  files:
    rotateeverybytes: 10485760 # = 10MB
    path: /var/log/mybeat
    name: mybeat.log
    keepfiles: 7
  level: info
```

I don't see the out.log files from all the folders being picked up at the set time.

Can you please tell me what I am doing wrong?

Thanks in advance for your help.

Akilen

> Each process runs at a fixed time and its output is stored in a log file.

Do you mean that you manually start Filebeat and then kill it afterwards?
I am not sure I understand what "at set time" means in your case.

For example,
`/home/deltion/kinaxia/stage/_logs_22609-6240/out.log`
and
`/home/deltion/kinaxia/stage/_logs_22548-6238/out.log`

need to be picked up every 10 minutes, but only one of the files is processed.

Hello @Akilen_Pandian

Is the "out.log" a completely new file every 10 minutes?

Yes, it is.

@Akilen_Pandian

I have a few suggestions that should help in your case, but I will make the following assumptions:

  1. The out.log file is created anew every 10 minutes.
  2. The out.log file gets a new inode on every creation (Filebeat tracks inodes to tell new files apart from ones it has already read).

You can do the following.

You don't need to define a new input per log file; you can use wildcards instead:

```yaml
-
  paths:
    - /home/deltion/kinaxia/stage/*/out.log
  multiline.pattern: '^([0-9]{4}-[0-9]{2}-[0-9]{2})'
  multiline.negate: true
  multiline.match: after
  input_type: log
  document_type: kinaxia_stage
```

Note: I've removed the backoff and backoff_factor options; I think this is why you don't get all the logs. By removing them, Filebeat will scan and detect new files as they appear.
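For context, `backoff` only controls how long a harvester waits before re-checking a file it has already opened for new content; how quickly the prospector notices *new* files matching the glob is governed by `scan_frequency` (default 10s). A minimal sketch, with the default spelled out explicitly:

```yaml
-
  paths:
    - /home/deltion/kinaxia/stage/*/out.log
  input_type: log
  # How often the prospector re-evaluates the glob to pick up new files.
  # 10s is the default; shown here only to make the behaviour explicit.
  scan_frequency: 10s
```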

I don't know the nature or the size of the logs you are currently monitoring, but it is possible that the file is removed while Filebeat is still processing it, or while there is back pressure from the outputs; either case could lead to data loss.

So I think it would be wise to think about the following:

  1. Are you controlling that rotation?
  2. Instead of replacing the file, could you create another file next to the previous one? You could then have a process that removes older files after a few hours or days (see the sketch after this list).
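If the rotation can be changed that way, a prospector along these lines could pick up the new files, with `ignore_older` keeping Filebeat from re-reading old ones before the cleanup job removes them. This is only a sketch; the `out-*.log` naming is hypothetical, assuming the process writes a uniquely named file on each run:

```yaml
-
  paths:
    # Hypothetical naming scheme: one uniquely named file per run.
    - /home/deltion/kinaxia/stage/*/out-*.log
  input_type: log
  # Skip files last modified more than 24 hours ago; a separate
  # cleanup job can then delete them safely.
  ignore_older: 24h
```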

When I started, I had the configuration you suggested, but I had the same problem, so I changed it. Will this fix my problem?

To answer your questions:

  1. No, I don't have control over the log rotation.
  2. I need to talk with my system team to see whether they can generate a new log file every time the process runs.

Thanks

Hi @pierhugues

Thanks for your help.

As you suspected, the issue was the file getting replaced while it was still being processed.

I have asked the system admin to change the process.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.