I am using Filebeat 5.6 to send logs to Elasticsearch (via Logstash). After starting Filebeat, I always get the lines below in the log output:
C:\opt\filebeat\bin>filebeat -e -d "publish"
2018/06/29 14:43:03.372455 beat.go:297: INFO Home path: [C:\opt\filebeat\bin] Config path: [C:\opt\filebeat\bin] Data path: [C:\opt\filebeat\bin\data] Logs path: [C:\opt\filebeat\bin\logs]
2018/06/29 14:43:03.372455 metrics.go:23: INFO Metrics logging every 30s
2018/06/29 14:43:03.373455 beat.go:192: INFO Setup Beat: filebeat; Version: 5.6.2
2018/06/29 14:43:03.373455 logstash.go:90: INFO Max Retries set to: 3
2018/06/29 14:43:03.373455 outputs.go:108: INFO Activated logstash as output plugin.
2018/06/29 14:43:03.374455 publish.go:243: DBG Create output worker
2018/06/29 14:43:03.374455 publish.go:285: DBG No output is defined to store the topology. The server fields might not be filled.
2018/06/29 14:43:03.374455 publish.go:300: INFO Publisher name: ldncagsaqi600ua
2018/06/29 14:43:03.382455 async.go:63: INFO Flush Interval set to: 1s
2018/06/29 14:43:03.382455 async.go:64: INFO Max Bulk Size set to: 2048
2018/06/29 14:43:03.382455 async.go:72: DBG create bulk processing worker (interval=1s, bulk size=2048)
2018/06/29 14:43:03.383455 beat.go:233: INFO filebeat start running.
2018/06/29 14:43:03.384455 registrar.go:85: INFO Registry file set to: C:\opt\filebeat\bin\data\registry
2018/06/29 14:43:03.384455 registrar.go:106: INFO Loading registrar data from C:\opt\filebeat\bin\data\registry
2018/06/29 14:43:03.385455 registrar.go:123: INFO States Loaded from registrar: 0
2018/06/29 14:43:03.385455 registrar.go:236: INFO Starting Registrar
2018/06/29 14:43:03.385455 crawler.go:38: INFO Loading Prospectors: 1
2018/06/29 14:43:03.385455 sync.go:41: INFO Start sending events to output
2018/06/29 14:43:03.385455 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/06/29 14:43:03.386455 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2018/06/29 14:43:03.387455 prospector.go:124: INFO Starting prospector of type: log; id: 17456318105674495528
2018/06/29 14:43:03.387455 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/06/29 14:43:33.372455 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/06/29 14:44:03.370455 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/06/29 14:44:33.368455 metrics.go:34: INFO No non-zero metrics in the last 30s
2018/06/29 14:45:03.367455 metrics.go:34: INFO No non-zero metrics in the last 30s
Please help me figure out what is wrong here.
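In case it helps, I could also rerun with all debug selectors enabled to get more detail (a sketch, assuming the -d flag accepts "*" as described in the Beats docs):

C:\opt\filebeat\bin>filebeat -e -d "*"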
My YAML config is below:
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
#=========================== Filebeat prospectors =============================
filebeat.prospectors:
# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.
- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    #- /var/log/*.log
    - D:\QISDSP\Releases\Release 4.6.2.3\MQA Service\Log.txt
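    # Sketch (not part of my original config): single-quoting the Windows path
    # would sidestep any YAML escaping questions around backslashes and spaces:
    #- 'D:\QISDSP\Releases\Release 4.6.2.3\MQA Service\Log.txt'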
  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that match any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines
  # that match any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that match any regular expression from the list. By default, no files
  # are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering.
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java stack traces or C line continuation.

  # The regexp pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines whether the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define whether lines should be appended to a pattern
  # that was (not) matched before or after, or as long as a pattern is not matched based on negate.
  # Note: "after" is the equivalent of "previous" and "before" is the equivalent of "next" in Logstash.
  #multiline.match: after
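  # Hypothetical example (not in my config): to fold lines that do NOT start
  # with a date onto the preceding event, the three options combine like this:
  #multiline.pattern: '^\d{4}-\d{2}-\d{2}'
  #multiline.negate: true
  #multiline.match: after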
#================================ General =====================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging
#================================ Outputs =====================================
# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["169.171.160.12:5601"]
  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["sd-0ec1-e570.nam.nsroot.net:5044"]

  # Optional SSL. By default it is off.
  # List of root certificates for HTTPS server verification
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"
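  # Sketch (not in my config; "second-host" is a placeholder): with more than
  # one Logstash endpoint, events could be load balanced across them:
  #hosts: ["sd-0ec1-e570.nam.nsroot.net:5044", "second-host:5044"]
  #loadbalance: true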
#================================ Logging =====================================
# Sets log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: info
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
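# Sketch (not in my config): while debugging why no events are shipped, the
# prospector and harvester selectors should show whether the file is found:
#logging.level: debug
#logging.selectors: ["prospector", "harvester"]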