Filebeat not sending to Kafka

Hi all,

I'm facing an issue where my Filebeat instance is either not sending events to my Kafka brokers, or not reading the log file specified in my filebeat.yml config.

My Kafka output config looks like this:

output.kafka:
  hosts: ["*************:9096", "*************:9096"]
  topic: bigdata-logs
  version: 0.10.0

  username: "******************"
  password: "******************"

  compression: snappy
  required_acks: 1
  ssl.enabled: true
  ssl.certificate_authorities: ["C:/filebeat/TestRootCA.cer"]
  ssl.verification_mode: full
  ssl.supported_protocols: [TLSv1.1, TLSv1.2]

My Filebeat log output is as follows:

2017-10-18T09:35:41+02:00 INFO Metrics logging every 10s
2017-10-18T09:35:41+02:00 INFO Setup Beat: filebeat; Version: 5.6.3
2017-10-18T09:35:41+02:00 INFO Activated kafka as output plugin.
2017-10-18T09:35:41+02:00 INFO Publisher name: bigdata
2017-10-18T09:35:41+02:00 INFO Flush Interval set to: 1s
2017-10-18T09:35:41+02:00 INFO Max Bulk Size set to: 2048
2017-10-18T09:35:41+02:00 INFO filebeat start running.
2017-10-18T09:35:41+02:00 INFO Registry file set to: C:\ProgramData\filebeat\registry
2017-10-18T09:35:41+02:00 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2017-10-18T09:35:41+02:00 INFO States Loaded from registrar: 18
2017-10-18T09:35:41+02:00 INFO Starting Registrar
2017-10-18T09:35:41+02:00 INFO Loading Prospectors: 1
2017-10-18T09:35:41+02:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-10-18T09:35:41+02:00 INFO Start sending events to output
2017-10-18T09:35:41+02:00 INFO Prospector with previous states loaded: 3
2017-10-18T09:35:41+02:00 INFO Starting prospector of type: log; id: 17626560764703821254
2017-10-18T09:35:41+02:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-10-18T09:35:41+02:00 INFO Harvester started for file: ***********************************
2017-10-18T09:35:51+02:00 INFO Non-zero metrics in the last 10s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 publish.events=4 registrar.states.current=18 registrar.states.update=4 registrar.writes=1
2017-10-18T09:36:01+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:11+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:21+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:31+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:41+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:51+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:01+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:11+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:21+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:31+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:41+02:00 INFO No non-zero metrics in the last 10s

Can someone please assist? No logs are being shipped to Kafka, and there are no errors in the Filebeat log.

Can you share your complete Filebeat configuration? Please format configs and logs using the </> button, so your posts will be more readable and indentation is preserved (very important for config files).
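
In the meantime, you can turn on debug logging for the output to see what the Kafka client is actually doing. A minimal sketch for Filebeat 5.x (selector names may vary slightly between versions):

logging.level: debug
logging.selectors: ["kafka", "publish"]

Or run Filebeat in the foreground with the same selectors:

.\filebeat.exe -e -c filebeat.yml -d "kafka,publish"

That should surface connection attempts, TLS handshake failures, and publish results that the info level hides.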

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - C:\SalesTriggers\logs\Release\*Console.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash
  #multiline.match: after
  encoding: utf-16be
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after
  ignore_older: 72h


#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
name: bigdata
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
fields:
    env: prd
    app: Sales-Triggers
#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#-------------------------- Kafka output ------------------------------
output.kafka:
  hosts: ["prdhost1.com:9096", "prdhost2.com:9096"]
  topic: bigdata-logs
  version: 0.10.0

  username: "user****"
  password: "pass******"
  
  compression: snappy
  required_acks: 1
  ssl.enabled: true
  ssl.certificate_authorities: ["C:/filebeat/TestRootCA.cer"]
  ssl.verification_mode: full
  ssl.supported_protocols: [TLSv1.1, TLSv1.2]

#================================ Logging =====================================

# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug
logging.metrics.enabled: true
logging.metrics.period: 10s 
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

2017-10-18T09:35:41+02:00 INFO Metrics logging every 10s
2017-10-18T09:35:41+02:00 INFO Setup Beat: filebeat; Version: 5.6.3
2017-10-18T09:35:41+02:00 INFO Activated kafka as output plugin.
2017-10-18T09:35:41+02:00 INFO Publisher name: bigdata
2017-10-18T09:35:41+02:00 INFO Flush Interval set to: 1s
2017-10-18T09:35:41+02:00 INFO Max Bulk Size set to: 2048
2017-10-18T09:35:41+02:00 INFO filebeat start running.
2017-10-18T09:35:41+02:00 INFO Registry file set to: C:\ProgramData\filebeat\registry
2017-10-18T09:35:41+02:00 INFO Loading registrar data from C:\ProgramData\filebeat\registry
2017-10-18T09:35:41+02:00 INFO States Loaded from registrar: 18
2017-10-18T09:35:41+02:00 INFO Starting Registrar
2017-10-18T09:35:41+02:00 INFO Loading Prospectors: 1
2017-10-18T09:35:41+02:00 INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2017-10-18T09:35:41+02:00 INFO Start sending events to output
2017-10-18T09:35:41+02:00 INFO Prospector with previous states loaded: 3
2017-10-18T09:35:41+02:00 INFO Starting prospector of type: log; id: 17626560764703821254
2017-10-18T09:35:41+02:00 INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2017-10-18T09:35:41+02:00 INFO Harvester started for file: C:\SalesTriggers\logs\Release\Barclays.Console.log
2017-10-18T09:35:51+02:00 INFO Non-zero metrics in the last 10s: filebeat.harvester.open_files=1 filebeat.harvester.running=1 filebeat.harvester.started=1 publish.events=4 registrar.states.current=18 registrar.states.update=4 registrar.writes=1
2017-10-18T09:36:01+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:11+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:21+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:31+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:41+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:36:51+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:01+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:11+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:21+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:31+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:41+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:37:51+02:00 INFO No non-zero metrics in the last 10s
2017-10-18T09:38:01+02:00 INFO No non-zero metrics in the last 10s

Above is the full config, minus the Kafka server details as well as the username and password.
The log output is provided as well.

I'm missing messages from the Kafka output, and there are no metrics about events being ACKed. Can you check with netstat whether Filebeat is even connecting? Also check the Kafka logs.
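
For example, on Windows something like this (PowerShell; the broker name is taken from your config, and Test-NetConnection needs a reasonably recent Windows):

netstat -ano | findstr 9096
Test-NetConnection -ComputerName prdhost1.com -Port 9096

The first command should show an ESTABLISHED connection owned by the Filebeat PID if the output connected; the second checks plain TCP reachability of the broker port.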


I checked netstat -o and can't find PID 3036, which is the PID of the Filebeat service that is running.
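
For reference, roughly what I ran, first to confirm the PID and then to look for its connections (the PID differs per restart):

tasklist | findstr /i filebeat
netstat -ano | findstr 3036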

Anything in the Kafka logs?
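
You could also watch the topic directly to see whether anything arrives at all; roughly, on one of the brokers (paths depend on your installation, and with SASL/SSL you would additionally need a --consumer.config file carrying the credentials):

bin/kafka-console-consumer.sh --bootstrap-server prdhost1.com:9096 --topic bigdata-logs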

Do you have firewall rules that could be preventing Beats from doing the connection bootstrapping?
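
Note that the client first connects to one of the configured hosts to fetch cluster metadata and then connects to the addresses the brokers advertise; if those advertised listeners are unreachable from the Filebeat machine, bootstrapping can stall without an obvious error. If you have kafkacat available with network access to the cluster, listing the metadata shows exactly what the brokers advertise (your SASL/SSL setup would need extra -X options):

kafkacat -L -b prdhost1.com:9096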

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.