I am new to Filebeat 7.0.0 and am trying to publish log files to Kafka 2.10. Here is my filebeat.yml:
```yaml
###################### Filebeat Configuration Example #########################

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/src/ste/iotlogs/df/ori/*.log
    #- /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to next in Logstash
  #multiline.match: after

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false

#-------------------------- Kafka output ------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: "crf1:6667"

  # message topic selection + partitioning
  topic: 'STE-DF-OR'
```
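For reference, the Filebeat documentation's Kafka output examples write `hosts` as a YAML list rather than a bare string; I do not know whether my scalar form makes a difference, but the list form would look like this (same broker and topic, just reformatted):

```yaml
output.kafka:
  # same broker as above, written as a list as in the reference examples
  hosts: ["crf1:6667"]
  topic: 'STE-DF-OR'
```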
When I run Filebeat with the command below:

```sh
./filebeat -e -c filebeat.yml
```

it starts a harvester for the file, but no events are ever delivered to Kafka.
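A minimal way to narrow this down, assuming the stock Filebeat test subcommands and the standard Kafka console consumer are available (paths and flags depend on the Kafka installation; very old brokers may need --zookeeper instead of --bootstrap-server), would be:

```sh
# Validate that the configuration file parses cleanly
./filebeat test config -c filebeat.yml

# Try to connect to the configured output
# (I am not certain the kafka output supports this subcommand in 7.0.0)
./filebeat test output -c filebeat.yml

# Watch the topic on the broker side to see whether anything arrives
bin/kafka-console-consumer.sh --bootstrap-server crf1:6667 --topic STE-DF-OR
```

If the console consumer shows nothing while the harvester is running, the problem is presumably between Filebeat's publisher and the broker rather than in the input.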