Unable to get files from filebeat to kafka

I am new to Filebeat 7.0.0 and am trying to push log files to Kafka 2.10. Here is the relevant part of my filebeat.yml:
```yaml
###################### Filebeat Configuration Example #########################

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/src/ste/iotlogs/df/ori/*.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#-------------------------- Kafka output ------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: "crf1:6667"

  # message topic selection + partitioning
  topic: 'STE-DF-OR'
```

But when I run the command below, the harvester starts reading the file, yet no data is transferred to Kafka:

```shell
./filebeat -e -c filebeat.yml
```
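To rule out a glob mismatch on my side, a quick check like the following (a sketch, assuming Python is available on the host) shows whether any files actually match the configured path:

```python
import glob

# The same glob pattern as configured under filebeat.inputs paths.
pattern = "/usr/local/src/ste/iotlogs/df/ori/*.log"

matches = glob.glob(pattern)
print(f"{len(matches)} file(s) match {pattern}")
for path in matches:
    print(" ", path)
```

If this prints zero matches, Filebeat's harvester has nothing to pick up regardless of the output settings.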

@massiveashok Kindly provide your complete filebeat.yml using the </> (code formatting) option.

Please see below, my complete filebeat.yml file:

```yaml
###################### Filebeat Configuration Example #########################

#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/src/ste/iotlogs/df/ori/*.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#-------------------------- Kafka output ------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: "cch1w01:6667"

  # message topic selection + partitioning
  topic: 'STE-DF-ORI'

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
```

@massiveashok Select all of your content and then choose the </> option in the editor; otherwise the config/log file is hard to read. What you replied is in just the same format as what you shared earlier.

For your reference, I have reformatted your config as shown below. Kindly use this kind of formatting in future queries; it is very helpful for all community members.

```yaml
#=========================== Filebeat inputs =============================

filebeat.inputs:

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/src/ste/iotlogs/df/ori/*.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#============================== Dashboards =====================================

# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

#============================== Kibana =====================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

#============================= Elastic Cloud ==================================

# These settings simplify using filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#-------------------------- Kafka output ------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: "cch1w01:6667"

  # message topic selection + partitioning
  topic: 'STE-DF-ORI'

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Processors =====================================

# Configure processors to enhance or manipulate events generated by the beat.
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~

#================================ Logging =====================================

#================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
```

After reviewing your filebeat.yml file, I want to confirm: what is your input (files or modules)? You have not enabled the Filebeat input of type log in your config file (it still has `enabled: false`).
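For reference, an input only ships anything once it is enabled. A minimal sketch of the relevant lines, keeping the path from your config:

```yaml
filebeat.inputs:
- type: log
  # The input is ignored entirely while this is false.
  enabled: true
  paths:
    - /usr/local/src/ste/iotlogs/df/ori/*.log
```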

My input is files. I noticed the enabled option and changed it to true.

I am using the commands below to start and run Filebeat, and it prints log lines every few seconds:

```shell
./filebeat
./filebeat -e -c filebeat.yml
```

```
2019-04-30T13:32:02.101+0530 INFO [monitoring] log/log.go:144 Non-zero metrics in the last 30s {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":1370,"time":{"ms":19}},"total":{"ticks":4410,"time":{"ms":51},"value":4410},"user":{"ticks":3040,"time":{"ms":32}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":9},"info":{"ephemeral_id":"0aaeabf1-54cb-47da-9e7c-6e1c28d43c2d","uptime":{"ms":2790023}},"memstats":{"gc_next":6516304,"memory_alloc":5365160,"memory_total":1069359768}},"filebeat":{"events":{"active":-3,"added":15,"done":18},"harvester":{"open_files":2,"running":2}},"libbeat":{"config":{"module":{"running":0}},"output":{"events":{"acked":18,"batches":7,"total":18}},"outputs":{"kafka":{"bytes_read":684,"bytes_write":9263}},"pipeline":{"clients":5,"events":{"active":0,"published":15,"total":15},"queue":{"acked":18}}},"registrar":{"states":{"current":3,"update":18},"writes":{"success":7,"total":7}},"system":{"load":{"1":0.34,"15":0.46,"5":0.4,"norm":{"1":0.0425,"15":0.0575,"5":0.05}}}}}}
```

Then, when I check the topic with the Kafka console consumer using the command below, it does show records, but I cannot see my own file's content in the topic:

```shell
./kafka-console-consumer.sh --bootstrap-server cchf:6667 --topic STE-DF-ORI
```

```json
{"@timestamp":"2019-04-30T08:03:32.048Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.0.0","pipeline":"filebeat-7.0.0-system-auth-pipeline","topic":"STE-DF-ORI"},"agent":{"ephemeral_id":"f3327bc2-bc8e-43ce-a9e0-d4ce1f869886","hostname":"cch1wpsteris01","id":"aaad65da-9d96-4d61-b049-576faf72d5ca","version":"7.0.0","type":"filebeat"},"message":"Apr 30 13:33:30 [localhost] su: pam_unix(su-l:session): session opened for user zeppelin by (uid=0)","input":{"type":"log"},"fileset":{"name":"auth"},"ecs":{"version":"1.0.0"},"log":{"offset":10117568,"file":{"path":"/var/log/secure"}},"event":{"dataset":"system.auth","module":"system"},"service":{"type":"system"},"host":{"architecture":"x86_64","os":{"version":"7 (Core)","family":"redhat","name":"CentOS Linux","kernel":"3.10.0-957.10.1.el7.x86_64","codename":"Core","platform":"centos"},"id":"d81ea9c43584477c8f8be42569521b05","containerized":true,"hostname":"cchf","name":"cchf"}}
```
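Parsing one of the consumed records (abbreviated here to the relevant fields) makes its origin explicit:

```python
import json

# One consumed record, abbreviated to the fields relevant here;
# the full record is shown above.
record = json.loads("""
{"@metadata": {"pipeline": "filebeat-7.0.0-system-auth-pipeline", "topic": "STE-DF-ORI"},
 "input": {"type": "log"},
 "fileset": {"name": "auth"},
 "log": {"offset": 10117568, "file": {"path": "/var/log/secure"}},
 "event": {"dataset": "system.auth", "module": "system"}}
""")

# The record was harvested from /var/log/secure by the system module's
# auth fileset, not from the path configured under filebeat.inputs.
print(record["log"]["file"]["path"])  # -> /var/log/secure
print(record["event"]["module"])      # -> system
```

So events are reaching the topic, but the ones shown come from the system module rather than from the iotlogs directory.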

Do I need to enable Elasticsearch if I am sending files from Filebeat to Kafka?

There is no need to enable Elasticsearch to send data to Kafka.
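A minimal file-to-Kafka filebeat.yml looks roughly like this (a sketch reusing the broker and topic names from this thread; the Kafka output docs normally give `hosts` as a list):

```yaml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /usr/local/src/ste/iotlogs/df/ori/*.log

output.kafka:
  # Brokers used to read the cluster metadata; normally written as a list.
  hosts: ["cch1w01:6667"]
  topic: 'STE-DF-ORI'
```

Nothing Elasticsearch-related is required for this path.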

Do you have any update on this issue?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.