Error while starting filebeat

[fuseadmin@a0110pcsgesb01 filebeat-5.5.0-linux-x86_64]$ ./filebeat -e -c filebeat.yml
2018/04/29 19:35:47.194084 beat.go:285: INFO Home path: [/app/filebeat-5.5.0-linux-x86_64] Config path: [/app/filebeat-5.5.0-linux-x86_64] Data path: [/app/filebeat-5.5.0-linux-x86_64/data] Logs path: [/app/filebeat-5.5.0-linux-x86_64/logs]
2018/04/29 19:35:47.194186 beat.go:186: INFO Setup Beat: filebeat; Version: 5.5.0
2018/04/29 19:35:47.194275 metrics.go:23: INFO Metrics logging every 30s
2018/04/29 19:35:47.194428 output.go:258: INFO Loading template enabled. Reading template file: /app/filebeat-5.5.0-linux-x86_64/filebeat.template.json
2018/04/29 19:35:47.196028 output.go:269: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /app/filebeat-5.5.0-linux-x86_64/filebeat.template-es2x.json
2018/04/29 19:35:47.197190 output.go:281: INFO Loading template enabled for Elasticsearch 6.x. Reading template file: /app/filebeat-5.5.0-linux-x86_64/filebeat.template-es6x.json
2018/04/29 19:35:47.198291 client.go:128: INFO Elasticsearch url: http://10.89.13.26:5044
2018/04/29 19:35:47.198343 outputs.go:108: INFO Activated elasticsearch as output plugin.
2018/04/29 19:35:47.198946 publish.go:295: INFO Publisher name: a0110pcsgesb01
2018/04/29 19:35:47.201780 async.go:63: INFO Flush Interval set to: 1s
2018/04/29 19:35:47.201807 async.go:64: INFO Max Bulk Size set to: 50
2018/04/29 19:35:47.202731 beat.go:221: INFO filebeat start running.
2018/04/29 19:35:47.202829 registrar.go:85: INFO Registry file set to: /app/filebeat-5.5.0-linux-x86_64/data/registry
2018/04/29 19:35:47.202876 registrar.go:106: INFO Loading registrar data from /app/filebeat-5.5.0-linux-x86_64/data/registry
2018/04/29 19:35:47.202917 registrar.go:123: INFO States Loaded from registrar: 0
2018/04/29 19:35:47.202993 crawler.go:38: INFO Loading Prospectors: 1
2018/04/29 19:35:47.203164 prospector_log.go:65: INFO Prospector with previous states loaded: 0
2018/04/29 19:35:47.203166 sync.go:41: INFO Start sending events to output
2018/04/29 19:35:47.203067 registrar.go:236: INFO Starting Registrar
2018/04/29 19:35:47.203394 prospector.go:124: INFO Starting prospector of type: log; id: 12509029797428055740
2018/04/29 19:35:47.203428 spooler.go:63: INFO Starting spooler: spool_size: 2048; idle_timeout: 5s
2018/04/29 19:35:47.203430 crawler.go:58: INFO Loading and starting Prospectors completed. Enabled prospectors: 1
2018/04/29 19:35:47.204110 log.go:91: INFO Harvester started for file: /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service02/data/log/fuse.log
2018/04/29 19:35:47.204308 log.go:91: INFO Harvester started for file: /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service03/data/log/fuse.log
2018/04/29 19:35:47.204464 log.go:91: INFO Harvester started for file: /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service04/data/log/fuse.log
2018/04/29 19:35:47.204718 log.go:91: INFO Harvester started for file: /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service05/data/log/fuse.log
2018/04/29 19:35:47.204758 log.go:91: INFO Harvester started for file: /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service01/data/log/fuse.log
2018/04/29 19:35:47.750111 single.go:140: ERR Connecting error publishing events (retrying): Get http://10.89.13.26:5044: read tcp 10.89.13.5:47302->10.89.13.26:5044: read: connection reset by peer
2018/04/29 19:35:48.762721 single.go:140: ERR Connecting error publishing events (retrying): Get http://10.89.13.26:5044: read tcp 10.89.13.5:47340->10.89.13.26:5044: read: connection reset by peer
2018/04/29 19:35:50.766408 single.go:140: ERR Connecting error publishing events (retrying): Get http://10.89.13.26:5044: read tcp 10.89.13.5:47420->10.89.13.26:5044: read: connection reset by peer
2018/04/29 19:35:54.769024 single.go:140: ERR Connecting error publishing events (retrying): Get http://10.89.13.26:5044: read tcp 10.89.13.5:47554->10.89.13.26:5044: read: connection reset by peer
2018/04/29 19:36:02.771662 single.go:140: ERR Connecting error publishing events (retrying): Get http://10.89.13.26:5044: read tcp 10.89.13.5:47736->10.89.13.26:5044: read: connection reset by peer
^C2018/04/29 19:36:16.630485 filebeat.go:230: INFO Stopping filebeat
2018/04/29 19:36:16.630584 crawler.go:90: INFO Stopping Crawler
2018/04/29 19:36:16.630601 crawler.go:100: INFO Stopping 1 prospectors
2018/04/29 19:36:16.630609 prospector.go:205: INFO Prospector outlet closed
2018/04/29 19:36:16.630645 prospector.go:137: INFO Prospector channel stopped because beat is stopping.
2018/04/29 19:36:16.630620 prospector.go:180: INFO Prospector ticker stopped
2018/04/29 19:36:16.630673 prospector.go:232: INFO Stopping Prospector: 12509029797428055740
2018/04/29 19:36:16.630838 crawler.go:112: INFO Crawler stopped
2018/04/29 19:36:16.630853 spooler.go:101: INFO Stopping spooler
2018/04/29 19:36:16.630892 registrar.go:291: INFO Stopping Registrar
2018/04/29 19:36:16.630903 registrar.go:248: INFO Ending Registrar
2018/04/29 19:36:16.642707 metrics.go:51: INFO Total non-zero values: filebeat.harvester.closed=5 filebeat.harvester.started=5 libbeat.es.publish.read_errors=5 libbeat.es.publish.write_bytes=615 libbeat.publisher.published_events=2043 registrar.writes=1
2018/04/29 19:36:16.642736 metrics.go:52: INFO Uptime: 29.455371173s
2018/04/29 19:36:16.642744 beat.go:225: INFO filebeat stopped.

[fuseadmin@a0110pcsgesb01 filebeat-5.5.0-linux-x86_64]$ cat filebeat.yml
###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.full.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

#=========================== Filebeat prospectors =============================

filebeat.prospectors:

# Each - is a prospector. Most options can be set at the prospector level, so
# you can use different prospectors for various configurations.
# Below are the prospector specific configurations.

- input_type: log

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service01/data/log/fuse.log
    - /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service02/data/log/fuse.log
    - /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service03/data/log/fuse.log
    - /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service04/data/log/fuse.log
    - /app/PROD/jboss/fuse/fuse01/instances/csg2-01-service05/data/log/fuse.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines
  # that are matching any regular expression from the list.
  #exclude_lines: ["^DBG"]

  # Include lines. A list of regular expressions to match. It exports the lines
  # that are matching any regular expression from the list.
  #include_lines: ["^ERR", "^WARN"]

  # Exclude files. A list of regular expressions to match. Filebeat drops the
  # files that are matching any regular expression from the list. By default,
  # no files are dropped.
  #exclude_files: [".gz$"]

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is
  # common for Java stack traces or C line continuation.

  # The regexp pattern that has to be matched. The example pattern matches all
  # lines starting with [
  multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines
  # should be appended to a pattern that was (not) matched before or after, or
  # as long as a pattern is not matched, based on negate.
  # Note: "after" is the equivalent to "previous" and "before" is the
  # equivalent to "next" in Logstash.
  multiline.match: after

#================================ General =====================================

# The name of the shipper that publishes the network data. It can be used to
# group all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

#================================ Outputs =====================================

# Configure what outputs to use when sending the data collected by the beat.
# Multiple outputs may be used.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
#output.logstash:
  # The Logstash hosts
  hosts: ["10.89.13.26:5044"]

  # Optional SSL. By default it is off.
  # List of root certificates for HTTPS server verification
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client certificate key
  #ssl.key: "/etc/pki/client/cert.key"

#================================ Logging =====================================

# Sets the log level. The default log level is info.
# Available log levels are: critical, error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
logging.selectors: ["*"]

It can't connect to the Logstash server.

Not quite correct; Filebeat is able to connect, but the other party in the connection (i.e. Logstash) terminates it.

The reason is that the output.logstash: line is commented out but the output.elasticsearch: line isn't, so the indented hosts: line attaches to the Elasticsearch output and Filebeat treats 10.89.13.26:5044 as an Elasticsearch server. That is why the log shows "Elasticsearch url: http://10.89.13.26:5044" and why the Logstash beats input, which does not speak HTTP, resets the connection.
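A minimal sketch of the corrected Outputs section, assuming events are meant to go to Logstash at 10.89.13.26:5044: comment out output.elasticsearch and uncomment output.logstash so that the hosts: line belongs to the Logstash output.

```yaml
#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["10.89.13.26:5044"]
```

Only one output may be enabled at a time in Filebeat 5.x, so exactly one of the two section headers should be uncommented.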
