Filebeat 6.1 not forwarding syslog and container_log to kafka (0.11.0.1)


(Sandeep Sarkar) #1

Hi ,

I have Filebeat 6.1 and Kafka 0.11.0.1, and I need to forward syslog and container_log to Kafka. The logs are not getting published; I am getting the log message below:

2018/02/06 15:11:15.630752 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=6179840 beat.memstats.memory_alloc=5557664 beat.memstats.memory_total=19849368880 filebeat.harvester.open_files=2 filebeat.harvester.running=2 libbeat.config.module.running=0 libbeat.output.events.batches=20981 libbeat.output.events.failed=2098100 libbeat.output.events.total=2098100 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=1046 libbeat.pipeline.events.retry=2098100 registrar.states.current=2

(Sandeep Sarkar) #2

After some time I get the two messages below:

2018/02/06 15:13:50.715558 harvester.go:240: INFO File is inactive: /var/log/secure. Closing because close_inactive of 5m0s reached.
2018/02/06 15:13:50.762407 harvester.go:240: INFO File is inactive: /var/log/messages. Closing because close_inactive of 5m0s reached.
2018/02/06 15:14:15.632668 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=30000 beat.memstats.gc_next=5710368 beat.memstats.memory_alloc=2915184 beat.memstats.memory_total=43555674656 filebeat.events.active=2 filebeat.events.added=2 filebeat.harvester.closed=2 filebeat.harvester.open_files=0 filebeat.harvester.running=0 libbeat.config.module.running=0 libbeat.output.events.batches=20779 libbeat.output.events.failed=2077900 libbeat.output.events.total=2077900 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=1046 libbeat.pipeline.events.filtered=2 libbeat.pipeline.events.retry=2077900 libbeat.pipeline.events.total=2 registrar.states.current=2
2018/02/06 15:14:45.631898 metrics.go:39: INFO Non-zero metrics in the last 30s: beat.info.uptime.ms=29999 beat.memstats.gc_next=5560352 beat.memstats.memory_alloc=5215424 beat.memstats.memory_total=47522038376 filebeat.harvester.open_files=0 filebeat.harvester.running=0 libbeat.config.module.running=0 libbeat.output.events.batches=20897 libbeat.output.events.failed=2089700 libbeat.output.events.total=2089700 libbeat.pipeline.clients=2 libbeat.pipeline.events.active=1046 libbeat.pipeline.events.retry=2089700 registrar.states.current=2

(Carlos Pérez Aradros) #3

Hi @Sandeep_Sarkar,

Could you please share your Filebeat configuration? What do you mean by container_log?

Best regards


(Sandeep Sarkar) #4

My filebeat configuration looks something like this :

 filebeat:
   # List of prospectors to fetch data.
   prospectors:
   ####################################################################
   #    Beat the host syslog and secure log
   ####################################################################
     - type: log
       paths:
         - /var/log/messages
         - /var/log/secure
       enabled: true
       encoding: utf-8
       fields:
         document_type: syslog
       fields_under_root: true
 
   ####################################################################
   #    Added new path to beat the logs of the each containers
   ####################################################################
     - type: log
       paths:
         - /var/log/containers/*.log
       enabled: true
       encoding: utf-8
       fields:
         document_type: container_log
       fields_under_root: true
 
       # Kubernetes uses symbolic links for docker container logs
       symlinks: true
 
       # Docker logs are json logs, each line is a separate log, but we need to combine multi-line logs
       json:
         message_key: log
         keys_under_root: true
         add_error_key: true
 
       # Lines that start with Traceback(python exceptions or Caused by (Java exceptions) or blank are associated with the previous log (allows multi-line logs)
       multiline:
         pattern: '^[[:alpha:]][[:alnum:]]+Error|^Warning|^[[:alpha:]][[:alnum:]]+Warning|^StopIteration|^StopAsyncIteration|^[[:space:]]+|^Caused by:|^[[:blank:]]'
         negate: false
         match: after
 
   # Name of the registry file.
   registry_file: /var/filebeat/registry
 
 ############################# Output ##########################################
 
 # Configure what outputs to use when sending the data collected by the beat.
 # Multiple outputs may be used.
 output:
   kafka:
     hosts: ["kafka-headless:9092", "kafka-headless:9093", "kafka-headless:9094"]
 
     topic: '%{[fields.document_type]}'
 
     partition.round_robin:
       reachable_only: true
 
     required_acks: 1
 
     max_message_bytes: 20480
     bulk_max_size: 100
 
     username: filebeat
 
     # The password for connecting to Kafka
     password: ******
 
     # The configurable ClientID used for logging, debugging, and auditing purposes. The default is "beats"
     client_id: filebeat
 
     version: 0.11.0.0
 
 ############################# Logging #########################################
 
 # There are three options for the log output: syslog, file, stderr.
 logging:
   # Only warning
   level: warning
 
   # Disable syslog
   to_syslog: false
 
   # Disable file logging
   to_files: false

(Sandeep Sarkar) #5

Prior to Filebeat 6.0 there was a document_type option, which could be set to syslog or container_log. Since document_type is deprecated, I am using type: log and, within fields, I have set document_type: container_log.
These are logs from containers, created under the /var/log/containers directory.
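For reference, the deprecated pre-6.0 option maps onto the fields mechanism roughly like this (a minimal sketch; the paths shown are illustrative):

```yaml
# Before Filebeat 6.0 (deprecated):
# - input_type: log
#   document_type: container_log

# Filebeat 6.x equivalent:
- type: log
  paths:
    - /var/log/containers/*.log
  fields:
    document_type: container_log
  # Promote fields.document_type to a top-level document_type field
  fields_under_root: true
```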


(Carlos Pérez Aradros) #6

Does Filebeat have access to those files? Typically you need root to access them; I just want to double-check that permissions are in place.


(Sandeep Sarkar) #7

Yes.


(Carlos Pérez Aradros) #8

Hmm, then Filebeat should be getting your logs. Are you running it inside a container? If so, are you also mounting the destination of the symlinks (/var/lib/docker/containers)?
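When Filebeat runs as a Kubernetes DaemonSet, both the symlinks and their targets must be mounted, or the links dangle inside the container. A minimal sketch of the relevant volume sections (the names and image tag are illustrative, not taken from this thread):

```yaml
containers:
  - name: filebeat
    image: docker.elastic.co/beats/filebeat:6.1.0
    volumeMounts:
      # The symlinked log files that the prospector globs
      - name: varlogcontainers
        mountPath: /var/log/containers
        readOnly: true
      # The symlink targets; without this mount the links are dangling
      - name: varlibdockercontainers
        mountPath: /var/lib/docker/containers
        readOnly: true
volumes:
  - name: varlogcontainers
    hostPath:
      path: /var/log/containers
  - name: varlibdockercontainers
    hostPath:
      path: /var/lib/docker/containers
```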


(Sandeep Sarkar) #9

Filebeat is getting the logs all right. I had enabled the debug option and could see debug logs; after some time I saw this message:

 Kafka publish failed with: circuit breaker is open

Sorry, I didn't mention this earlier.


(Carlos Pérez Aradros) #10

This looks like a Kafka issue when retrieving partition metadata. Check this answer on a previous post; it may be of help here: Three types of WARN/ERRORs in filebeat logs


(Sandeep Sarkar) #11

Hi @exekias,

https://www.elastic.co/guide/en/beats/filebeat/6.0/kafka-output.html#_compatibility_3 talks about compatibility with Kafka versions between 0.8.2.0 and 0.11.0.0. If I am on 0.11.0.1, could this break?
Also, the compatibility section says "This output works with Kafka 0.8, 0.9, and 0.10."
Can you please clarify? What would be the safest version to use?

Regards,
Sandeep Sarkar


(Sandeep Sarkar) #12

Hi @exekias,

When I update "version" under output.kafka in filebeat.yml to 0.11.0.1, I get the error message below:

2018/02/08 09:41:15.417691 beat.go:635: CRIT Exiting: error initializing publisher:  unknown/unsupported kafka version '0.11.0.1' accessing 'output.kafka' (source:'/etc/filebeat/filebeat.yml')
Exiting: error initializing publisher: unknown/unsupported kafka version '0.11.0.1' accessing 'output.kafka' (source:'/etc/filebeat/filebeat.yml')

Please let me know.
Thanks for helping out.
Sandeep Sarkar
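Filebeat validates the version string against a fixed list of known Kafka protocol versions, which is why the patch release 0.11.0.1 is rejected outright. Setting the nearest known version should work, since a patch release speaks the same protocol (an assumption based on the error above and the compatibility docs):

```yaml
output:
  kafka:
    # 0.11.0.1 is not in Filebeat 6.1's list of known versions;
    # 0.11.0.0 speaks the same wire protocol and is accepted.
    version: 0.11.0.0
```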


(Sandeep Sarkar) #13

Hi @all,

The solution to this problem is to update output.kafka.topic to '%{[document_type]}'. This is because fields_under_root: true is set, so any field defined under fields is promoted to the top level and must be referenced directly. For further reading: https://www.elastic.co/guide/en/beats/filebeat/current/migration-changed-fields.html.
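In other words, only the topic line in the output section needs to change; a minimal sketch of the corrected fragment:

```yaml
output:
  kafka:
    # fields_under_root: true promotes fields.document_type to the
    # top level, so reference it without the fields. prefix:
    topic: '%{[document_type]}'
```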

Thanks for helping out.
Sandeep Sarkar


(system) #14

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.