Hi,
My Filebeat (version 5.6.9) is currently monitoring 6 files, each generating ~11,500 new log entries per minute.
Filebeat does not appear to be able to keep up with this logging rate.
Below is my configuration:
filebeat:
  prospectors:
    -
      paths:
        - /apps/common/logs/*/bos/thatFile.log
      input_type: log
      ignore_older: 48h
      clean_inactive: 72h
      close_renamed: true
      clean_removed: true
      fields:
        appCode: '734'
        platformType: 'Linux'
      multiline:
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}'
        negate: true
        match: after
        timeout: 5s
      backoff: 1s
    -
      paths:
        - /apps/common/logs/*/olb/thisFile.log
      input_type: log
      ignore_older: 48h
      clean_inactive: 72h
      close_renamed: true
      clean_removed: true
      fields:
        appCode: '734'
        platformType: 'Linux'
      multiline:
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}'
        negate: true
        match: after
        timeout: 5s
      backoff: 1s
  registry_file: "/var/lib/filebeat/registry"
output:
  logstash:
    hosts:
      - 10.236.129.56:5045
      - 10.236.129.57:5045
      - 10.236.129.58:5045
    loadbalance: true
    ssl:
      certificate_authorities: ["/etc/filebeat/certs/agent.crt","/etc/filebeat/certs/chain.crt"]
      certificate: "/etc/filebeat/certs/agent.crt"
      key: "/etc/filebeat/certs/agent.key"
    worker: 4
logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat/
    name: filebeat.log
    keepfiles: 7
 
Is there a way to configure Filebeat so that it can keep up with the logs?
Thanks.
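For reference, these are the Filebeat 5.x settings that usually govern Logstash-output throughput. This is only an illustrative sketch; the values are examples to experiment with, not recommendations, though the option names themselves are documented 5.x settings:

```yaml
# Illustrative Filebeat 5.x throughput knobs (example values only).
filebeat:
  spool_size: 4096            # events buffered before a flush (default 2048)
  idle_timeout: 5s            # flush even if the spooler is not full
output:
  logstash:
    worker: 4                 # parallel connections per host
    bulk_max_size: 4096       # events per Logstash batch (default 2048)
    pipelining: 2             # batches in flight per connection
    compression_level: 3      # trade CPU for smaller payloads on the wire
    loadbalance: true         # spread batches across all listed hosts
```

Raising `spool_size` and `bulk_max_size` together is the usual first step; whether it helps depends on where the pipeline is actually saturating.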
kvch (Noémi Ványi)
March 28, 2019, 8:19am
Could you please share your debug logs? Also, please format your configuration using the </> button.
What do you mean by Filebeat not being able to catch up with the logging rate? Does the data disappear before Filebeat is able to forward it? (Filebeat is rarely the slowest element of a pipeline.) What is your expectation?
Is that 11,500 log entries or log files per minute? How many files are you tracking? What throughput are you seeing?
It's 11,500 log entries per minute.
It sounds unlikely that Filebeat would be the bottleneck at that rate unless the log entries are huge. How have you checked whether Filebeat is indeed the bottleneck?
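One concrete way to check is to compare each file's current size with the offset Filebeat has recorded for it: if the gap keeps growing, Filebeat is the one falling behind. A rough sketch, assuming the 5.x registry format (a plain JSON array of states with "source" and "offset" keys) and the registry path from the posted config:

```python
import json
import os


def unread_bytes(states, size_of):
    """Bytes between each file's current size and Filebeat's offset.

    states: list of dicts with "source" and "offset" keys (the 5.x
    registry format). size_of: callable mapping a path to its size.
    """
    return {s["source"]: size_of(s["source"]) - s["offset"] for s in states}


if __name__ == "__main__":
    # Registry path taken from the configuration in this thread.
    with open("/var/lib/filebeat/registry") as fh:
        states = json.load(fh)
    for path, lag in sorted(unread_bytes(states, os.path.getsize).items()):
        print(f"{path}: {lag} bytes unread")
```

Run it a few times a minute apart; a lag that only grows points at Filebeat (or its output), while a lag that stays near zero points further down the pipeline.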
              Hi kvch,
Here's the config
filebeat:
  prospectors:
    -
      paths:
        - /apps/common/logs/*/bos/thatFile.log
      input_type: log
      ignore_older: 48h
      clean_inactive: 72h
      close_renamed: true
      clean_removed: true
      fields:
        appCode: '734'
        platformType: 'Linux'
      multiline:
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}'
        negate: true
        match: after
        timeout: 5s
      backoff: 1s
    -
      paths:
        - /apps/common/logs/*/olb/thisFile.log
      input_type: log
      ignore_older: 48h
      clean_inactive: 72h
      close_renamed: true
      clean_removed: true
      fields:
        appCode: '734'
        platformType: 'Linux'
      multiline:
        pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}'
        negate: true
        match: after
        timeout: 5s
      backoff: 1s
  registry_file: "/var/lib/filebeat/registry"
output:
  logstash:
    hosts:
      - 10.236.129.56:5045
      - 10.236.129.57:5045
      - 10.236.129.58:5045
    loadbalance: true
    ssl:
      certificate_authorities: ["/etc/filebeat/certs/agent.crt","/etc/filebeat/certs/chain.crt"]
      certificate: "/etc/filebeat/certs/agent.crt"
      key: "/etc/filebeat/certs/agent.key"
    worker: 4
logging:
  level: debug
  to_files: true
  to_syslog: false
  files:
    path: /var/log/filebeat/
    name: filebeat.log
    keepfiles: 7
 
And here's the debug log
2019-03-27T14:30:05-04:00 DBG  Publish: {...something...}
2019-03-27T14:30:05-04:00 DBG  Registry file updated. 341 states written.
2019-03-27T14:30:05-04:00 DBG  Publish: {...something...} X 60
2019-03-27T14:30:05-04:00 DBG  Flushing spooler because spooler full. Events flushed: 2048
2019-03-27T14:30:05-04:00 DBG  Publish: {...something...} X 1932
2019-03-27T14:30:05-04:00 DBG  output worker: publish 2048 events
2019-03-27T14:30:05-04:00 DBG  forwards msg with attempts=-1
2019-03-27T14:30:05-04:00 DBG  message forwarded
2019-03-27T14:30:05-04:00 DBG  events from worker worker queue
2019-03-27T14:30:06-04:00 DBG  2048 events out of 2048 events sent to logstash host 10.236.129.56:5045:10200. Continue sending
2019-03-27T14:30:06-04:00 DBG  Events sent: 2048
2019-03-27T14:30:06-04:00 DBG  Processing 2048 events
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 79
2019-03-27T14:30:06-04:00 DBG  Registrar states cleaned up. Before: 341, After: 341
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 2
2019-03-27T14:30:06-04:00 DBG  Write registry file: /var/lib/filebeat/registry
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 30
2019-03-27T14:30:06-04:00 DBG  Registry file updated. 341 states written.
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 158
2019-03-27T14:30:06-04:00 DBG  Flushing spooler because spooler full. Events flushed: 2048
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 1775
2019-03-27T14:30:06-04:00 DBG  output worker: publish 2048 events
2019-03-27T14:30:06-04:00 DBG  forwards msg with attempts=-1
2019-03-27T14:30:06-04:00 DBG  message forwarded
2019-03-27T14:30:06-04:00 DBG  events from worker worker queue
2019-03-27T14:30:06-04:00 DBG  2048 events out of 2048 events sent to logstash host 10.236.129.58:5045:10200. Continue sending
2019-03-27T14:30:06-04:00 DBG  Events sent: 2048
2019-03-27T14:30:06-04:00 DBG  Processing 2048 events
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 28
2019-03-27T14:30:06-04:00 DBG  Registrar states cleaned up. Before: 341, After: 341
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...}
2019-03-27T14:30:06-04:00 DBG  Write registry file: /var/lib/filebeat/registry
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 22
2019-03-27T14:30:06-04:00 DBG  Registry file updated. 341 states written.
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 133
2019-03-27T14:30:06-04:00 DBG  Flushing spooler because spooler full. Events flushed: 2048
2019-03-27T14:30:06-04:00 DBG  Publish: {...something...} X 1861
2019-03-27T14:30:07-04:00 DBG  output worker: publish 2048 events
2019-03-27T14:30:07-04:00 DBG  forwards msg with attempts=-1
2019-03-27T14:30:07-04:00 DBG  message forwarded
2019-03-27T14:30:07-04:00 DBG  events from worker worker queue
2019-03-27T14:30:12-04:00 INFO Non-zero metrics in the last 30s: libbeat.logstash.call_count.PublishEvents=3 libbeat.logstash.publish.read_bytes=315 libbeat.logstash.publish.write_bytes=570263 libbeat.logstash.published_and_acked_events=6144 libbeat.publisher.published_events=6144 publish.events=6144 registrar.states.update=6144 registrar.writes=3
2019-03-27T14:30:12-04:00 DBG  Run prospector
2019-03-27T14:30:12-04:00 DBG  Start next scan
2019-03-27T14:30:12-04:00 DBG  Check file for harvesting: /apps/common/logs/BOS-server12/bos/BOSServices.log
2019-03-27T14:30:12-04:00 DBG  Update existing file for harvesting: /apps/common/logs/BOS-server12/bos/BOSServices.log, offset: 146506399
2019-03-27T14:30:12-04:00 DBG  Harvester for file is still running: /apps/common/logs/BOS-server12/bos/BOSServices.log
2019-03-27T14:30:12-04:00 DBG  Prospector states cleaned up. Before: 6, After: 6
2019-03-27T14:30:22-04:00 DBG  Run prospector
2019-03-27T14:30:22-04:00 DBG  Start next scan
2019-03-27T14:30:22-04:00 DBG  Check file for harvesting: /apps/common/logs/BOS-server12/bos/BOSServices.log
2019-03-27T14:30:22-04:00 DBG  Update existing file for harvesting: /apps/common/logs/BOS-server12/bos/BOSServices.log, offset: 146506399
2019-03-27T14:30:22-04:00 DBG  Harvester for file is still running: /apps/common/logs/BOS-server12/bos/BOSServices.log
2019-03-27T14:30:22-04:00 DBG  Prospector states cleaned up. Before: 6, After: 6
 
Thanks.
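For what it's worth, the "Non-zero metrics" INFO line in this log can be turned into a rough throughput figure. The arithmetic below only restates numbers already in the thread (6,144 acked events in a 30 s window; six files at ~11,500 entries/min from the opening post):

```python
# Outgoing rate implied by the metrics line.
acked_per_window = 6144          # published_and_acked_events in 30s
window_s = 30
outgoing_per_min = acked_per_window * 60 // window_s
print(outgoing_per_min)          # 12288 events/min leaving Filebeat

# Incoming rate implied by the opening post.
files, entries_per_file = 6, 11_500
incoming_per_min = files * entries_per_file
print(incoming_per_min)          # 69000 events/min being written
```

If both figures hold, the output side is well below the input side, though a single 30 s window is thin evidence either way; the metrics lines over a longer stretch would give a better picture.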
system (system)
Closed April 25, 2019, 2:05pm
              This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.