Unable to start Filebeat 7.9.2: "Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch"

Hi,
I just installed the ELK stack on one server and the Filebeat agent (version 7.9.2) on another.
OS: CentOS 7
Elasticsearch, Logstash, and Kibana are up on the first server, but I am unable to start the Filebeat agent on the other server. I am sending to Logstash, so the Elasticsearch output is disabled.

Below is the error:
[root@myhostname tmp]# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
Active: failed (Result: start-limit) since Wed 2020-10-21 08:44:04 MDT; 2min 14s ago
Docs: https://www.elastic.co/products/beats/filebeat
Process: 40168 ExecStart=/usr/share/filebeat/bin/filebeat --environment systemd $BEAT_LOG_OPTS $BEAT_CONFIG_OPTS $BEAT_PATH_OPTS (code=exited, status=1/FAILURE)
Main PID: 40168 (code=exited, status=1/FAILURE)

Oct 21 08:44:04 myhostname systemd[1]: filebeat.service: main process exited, code=exited, status=1/FAILURE
Oct 21 08:44:04 myhostname systemd[1]: Unit filebeat.service entered failed state.
Oct 21 08:44:04 myhostname systemd[1]: filebeat.service failed.
Oct 21 08:44:04 myhostname systemd[1]: filebeat.service holdoff time over, scheduling restart.
Oct 21 08:44:04 myhostname systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Oct 21 08:44:04 myhostname systemd[1]: start request repeated too quickly for filebeat.service
Oct 21 08:44:04 myhostname systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Oct 21 08:44:04 myhostname systemd[1]: Unit filebeat.service entered failed state.
Oct 21 08:44:04 myhostname systemd[1]: filebeat.service failed.
[root@myhostname tmp]#
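The systemd status above only shows that the unit hit its start limit; the underlying startup error usually has to be dug out of the journal, or seen by running Filebeat in the foreground. A few commands that can surface it (assuming the default package paths):

```shell
# Show the most recent Filebeat log lines captured by systemd
journalctl -u filebeat --no-pager -n 50

# Validate the configuration without starting the service
filebeat test config -c /etc/filebeat/filebeat.yml

# Run in the foreground, logging to stderr, to see the startup error directly
filebeat -e
```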

Below is my filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /data01/st2/var/logs/whiteboard/*.log
    - /data01/st2/var/logs/ui-server/*.log
    - /data01/st2/var/logs/aspose/*.log
    - /data01/st2/var/logs/api/*.log
    - /data01/st2/var/logs/api/cron/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch
  # Array of hosts to connect to.
  #hosts: ["10.34.22.20:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash
  # The Logstash hosts
  hosts: ["10.34.22.20:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: [/etc/pki/tls/certs/logstash-forwarder.crt]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Below is the log output:
cat /var/log/filebeat/filebeat
2020-10-21T07:42:12.157-0600 INFO instance/beat.go:640 Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2020-10-21T07:42:12.158-0600 INFO instance/beat.go:648 Beat ID: a78a3e05-0101-451b-a83a-c5f038159a89

Below is the error message:

[root@myhostname tmp]# filebeat test config
Exiting: error loading config file: yaml: line 166: could not find expected ':'
[root@myhostname tmp]#

But line 166 looks perfect to me!

# ------------------------------ Logstash Output -------------------------------
output.logstash
  # The Logstash hosts
  hosts: ["10.34.22.20:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: [/etc/pki/tls/certs/logstash-forwarder.crt]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

Please help me fix this issue.
TIA

Can you try putting that path between quotes?

ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
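For reference, "could not find expected ':'" is the error a YAML scanner raises when a mapping key is not followed by a colon — and in the snippet above, output.logstash has no trailing colon (the later config that passes `filebeat test config` has output.logstash:). A minimal reproduction using Python's PyYAML, purely for illustration (Filebeat uses its own YAML parser, but it reports the same message):

```python
import yaml  # PyYAML; used here only to illustrate the parser behaviour

# Same shape as the posted config: "output.logstash" is missing its colon.
broken = """\
filebeat.inputs:
- type: log

output.logstash
  hosts: ["10.34.22.20:5044"]
"""

try:
    yaml.safe_load(broken)
except yaml.YAMLError as err:
    # The message includes: could not find expected ':'
    print(err)
```

Note that the reported line number points at the place where the scanner gave up, which is typically just past the key that is missing its colon — which may be why line 166 of the file "looks perfect".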

Thank you, Mario!
Below is the latest status:

[root@myhostname tmp]# sudo systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: failed (Result: start-limit) since Fri 2020-10-23 02:15:34 MDT; 22min ago
     Docs: https://www.elastic.co/products/beats/filebeat
 Main PID: 4440 (code=exited, status=1/FAILURE)

Oct 23 02:15:34 myhostname systemd[1]: filebeat.service failed.
Oct 23 02:15:34 myhostname systemd[1]: filebeat.service holdoff time over, scheduling restart.
Oct 23 02:15:34 myhostname systemd[1]: Stopped Filebeat sends log files to Logstash or directly to Elasticsearch..
Oct 23 02:15:34 myhostname systemd[1]: start request repeated too quickly for filebeat.service
Oct 23 02:15:34 myhostname systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Oct 23 02:15:34 myhostname systemd[1]: Unit filebeat.service entered failed state.
Oct 23 02:15:34 myhostname systemd[1]: filebeat.service failed.
Oct 23 02:15:35 myhostname systemd[1]: start request repeated too quickly for filebeat.service
Oct 23 02:15:35 myhostname systemd[1]: Failed to start Filebeat sends log files to Logstash or directly to Elasticsearch..
Oct 23 02:15:35 myhostname systemd[1]: filebeat.service failed.
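One CentOS 7 detail worth knowing: once systemd hits the start limit ("start request repeated too quickly"), the unit stays in the failed state even after the underlying problem is fixed, so the rate-limit counter has to be cleared before the next start attempt will go through. Something like:

```shell
# Clear the start-limit / failed state recorded for the unit
sudo systemctl reset-failed filebeat

# Then try starting it again and re-check the status
sudo systemctl start filebeat
sudo systemctl status filebeat
```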

Current config file:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    - /data01/st2/var/logs/whiteboard/*.log
    - /data01/st2/var/logs/ui-server/*.log
    - /data01/st2/var/logs/aspose/*.log
    - /data01/st2/var/logs/api/*.log
    - /data01/st2/var/logs/api/cron/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

  ### Multiline options

  # Multiline can be used for log messages spanning multiple lines. This is common
  # for Java Stack Traces or C-Line Continuation

  # The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
  #multiline.pattern: ^\[

  # Defines if the pattern set under pattern should be negated or not. Default is false.
  #multiline.negate: false

  # Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
  # that was (not) matched before or after or as long as a pattern is not matched based on negate.
  # Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
  #multiline.match: after

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch
  # Array of hosts to connect to.
  #hosts: ["myIP:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["myIP:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  ssl.certificate_authorities: [/etc/pki/tls/certs/logstash-forwarder.crt]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

Can you please help with this?

Yes, I tried adding quotes as well:

  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]

Some logs for clarity:

[root@myhostname tmp]# filebeat  test config
Config OK
[root@myhostname tmp]# filebeat  test output --path.config /etc/filebeat
logstash: 10.34.22.20:5044...
  connection...
    parse host... OK
    dns lookup... OK
    addresses: 10.34.22.20
    dial up... OK
  TLS...
    security: server's certificate chain verification is enabled
    handshake... OK
    TLS version: TLSv1.2
    dial up... OK
  talk to server... OK

I tried single quotes too:

  ssl.certificate_authorities: ['/etc/pki/tls/certs/logstash-forwarder.crt']

OK, this is going to be tricky to troubleshoot. Try putting https:// in front of the Logstash host, just in case.

I was thinking it was some connection issue, but maybe not. Try setting the output to console in the filebeat.yml:

output.console:
    pretty: true

And comment out any other output. We will see if Filebeat at least starts and prints to the console. You can also get more debugging information if you start Filebeat manually with filebeat -e -d "*".

Thank you!

Please find the debug output below:

filebeat -e -d "*"
2020-10-23T07:03:54.515-0600    INFO    instance/beat.go:640    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2020-10-23T07:03:54.515-0600    DEBUG   [beat]  instance/beat.go:692    Beat metadata path: /var/lib/filebeat/meta.json
2020-10-23T07:03:54.515-0600    INFO    instance/beat.go:648    Beat ID: a78a3e05-0101-451b-a83a-c5f038159a89
2020-10-23T07:03:54.520-0600    DEBUG   [conditions]    conditions/conditions.go:98     New condition contains: map[]
2020-10-23T07:03:54.520-0600    DEBUG   [conditions]    conditions/conditions.go:98     New condition !contains: map[]
2020-10-23T07:03:54.520-0600    DEBUG   [docker]        docker/client.go:48     Docker client will negotiate the API version on the first request.
2020-10-23T07:03:54.521-0600    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:126     add_cloud_metadata: starting to fetch metadata, timeout=3s
2020-10-23T07:03:54.549-0600    DEBUG   [add_docker_metadata]   add_docker_metadata/add_docker_metadata.go:90   add_docker_metadata: docker environment detected
2020-10-23T07:03:54.549-0600    DEBUG   [add_docker_metadata.docker]    docker/watcher.go:202   Start docker containers scanner
2020-10-23T07:03:54.549-0600    DEBUG   [add_docker_metadata.docker]    docker/watcher.go:346   List containers
2020-10-23T07:03:54.555-0600    DEBUG   [add_docker_metadata.docker]    docker/watcher.go:252   Fetching events since 1603458234
2020-10-23T07:03:54.555-0600    DEBUG   [kubernetes]    add_kubernetes_metadata/kubernetes.go:138       Could not create kubernetes client using in_cluster config: unable to build kube config due to error: invalid configuration: no configuration has been provided, try setting KUBERNETES_MASTER environment variable  {"libbeat.processor": "add_kubernetes_metadata"}
2020-10-23T07:03:54.555-0600    DEBUG   [add_docker_metadata.docker.bus-docker] bus/bus.go:88   map[container:0xc0005061c0 start:true]  {"libbeat.bus": "docker"}
2020-10-23T07:03:54.555-0600    DEBUG   [add_docker_metadata.docker.bus-docker] bus/bus.go:88   map[container:0xc000506230 start:true]  {"libbeat.bus": "docker"}
2020-10-23T07:03:54.555-0600    DEBUG   [add_docker_metadata.docker.bus-docker] bus/bus.go:88   map[container:0xc0005062a0 start:true]  {"libbeat.bus": "docker"}
2020-10-23T07:03:54.555-0600    DEBUG   [add_docker_metadata.docker.bus-docker] bus/bus.go:88   map[container:0xc000506380 start:true]  {"libbeat.bus": "docker"}
2020-10-23T07:03:57.521-0600    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:169     add_cloud_metadata: timed-out waiting for all responses
2020-10-23T07:03:57.521-0600    DEBUG   [add_cloud_metadata]    add_cloud_metadata/providers.go:129     add_cloud_metadata: fetchMetadata ran for 3.000239337s
2020-10-23T07:03:57.521-0600    INFO    [add_cloud_metadata]    add_cloud_metadata/add_cloud_metadata.go:89     add_cloud_metadata: hosting provider type not detected.
2020-10-23T07:03:57.521-0600    DEBUG   [processors]    processors/processor.go:101     Generated new processors: add_host_metadata=[netinfo.enabled=[true], cache.ttl=[5m0s]], condition=!contains: map[], add_cloud_metadata={}, add_docker_metadata=[match_fields=[] match_pids=[process.pid, process.ppid]], add_kubernetes_metadata
2020-10-23T07:03:57.522-0600    DEBUG   [seccomp]       seccomp/seccomp.go:117  Loading syscall filter  {"seccomp_filter": {"no_new_privs":true,"flag":"tsync","policy":{"default_action":"errno","syscalls":[{"names":["accept","accept4","access","arch_prctl","bind","brk","chmod","chown","clock_gettime","clone","close","connect","dup","dup2","epoll_create","epoll_create1","epoll_ctl","epoll_pwait","epoll_wait","exit","exit_group","fchdir","fchmod","fchmodat","fchown","fchownat","fcntl","fdatasync","flock","fstat","fstatfs","fsync","ftruncate","futex","getcwd","getdents","getdents64","geteuid","getgid","getpeername","getpid","getppid","getrandom","getrlimit","getrusage","getsockname","getsockopt","gettid","gettimeofday","getuid","inotify_add_watch","inotify_init1","inotify_rm_watch","ioctl","kill","listen","lseek","lstat","madvise","mincore","mkdirat","mmap","mprotect","munmap","nanosleep","newfstatat","open","openat","pipe","pipe2","poll","ppoll","pread64","pselect6","pwrite64","read","readlink","readlinkat","recvfrom","recvmmsg","recvmsg","rename","renameat","rt_sigaction","rt_sigprocmask","rt_sigreturn","sched_getaffinity","sched_yield","sendfile","sendmmsg","sendmsg","sendto","set_robust_list","setitimer","setsockopt","shutdown","sigaltstack","socket","splice","stat","statfs","sysinfo","tgkill","time","tkill","uname","unlink","unlinkat","wait4","waitid","write","writev"],"action":"allow"}]}}}
2020-10-23T07:03:57.522-0600    INFO    [seccomp]       seccomp/seccomp.go:124  Syscall filter successfully installed
2020-10-23T07:03:57.522-0600    INFO    [beat]  instance/beat.go:976    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "a78a3e05-0101-451b-a83a-c5f038159a89"}}}
2020-10-23T07:03:57.522-0600    INFO    [beat]  instance/beat.go:985    Build info      {"system_info": {"build": {"commit": "7aab6a9659749802201db8020c4f04b74cec2169", "libbeat": "7.9.3", "time": "2020-10-16T09:16:16.000Z", "version": "7.9.3"}}}
2020-10-23T07:03:57.522-0600    INFO    [beat]  instance/beat.go:988    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.14.7"}}}
2020-10-23T07:03:57.527-0600    INFO    [beat]  instance/beat.go:992    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-04-22T20:48:03-06:00","containerized":false,"name":"dn1udstmgapp05","ip":["127.0.0.1/8","10.34.22.78/23","172.17.0.1/16"],"kernel_version":"3.10.0-1062.18.1.el7.x86_64","mac":["00:50:56:ab:23:d7","02:42:f8:b7:23:f9","82:ef:a4:94:c3:cb","3a:1e:a1:08:7b:e2","86:73:fb:b0:df:90","92:62:59:8e:c4:bf"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":8,"patch":2003,"codename":"Core"},"timezone":"MDT","timezone_offset_sec":-21600,"id":"7a0e74447d9b4b27a6aaa85c547f2b51"}}}
2020-10-23T07:03:57.527-0600    INFO    [beat]  instance/beat.go:1021   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/tmp", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 13277, "ppid": 10468, "seccomp": {"mode":"filter","no_new_privs":true}, "start_time": "2020-10-23T07:03:54.320-0600"}}}
2020-10-23T07:03:57.528-0600    INFO    instance/beat.go:299    Setup Beat: filebeat; Version: 7.9.3
2020-10-23T07:03:57.528-0600    DEBUG   [beat]  instance/beat.go:325    Initializing output plugins
2020-10-23T07:03:57.529-0600    DEBUG   [tls]   tlscommon/tls.go:155    Successfully loaded CA certificate: /etc/pki/tls/certs/logstash-forwarder.crt
2020-10-23T07:03:57.529-0600    DEBUG   [publisher]     pipeline/consumer.go:148        start pipeline event consumer
2020-10-23T07:03:57.529-0600    INFO    [publisher]     pipeline/module.go:113  Beat name: dn1udstmgapp05
2020-10-23T07:03:57.530-0600    WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.
2020-10-23T07:03:57.531-0600    INFO    [monitoring]    log/log.go:118  Starting metrics logging every 30s
2020-10-23T07:03:57.531-0600    INFO    kibana/client.go:119    Kibana url: http://localhost:5601
2020-10-23T07:03:57.537-0600    INFO    [monitoring]    log/log.go:153  Total non-zero metrics  {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":70,"time":{"ms":74}},"total":{"ticks":140,"time":{"ms":148},"value":140},"user":{"ticks":70,"time":{"ms":74}}},"handles":{"limit":{"hard":4096,"soft":1024},"open":10},"info":{"ephemeral_id":"8dc51ec5-55fe-409d-abdb-134debb1ee2c","uptime":{"ms":3084}},"memstats":{"gc_next":14847328,"memory_alloc":8352320,"memory_total":36416008,"rss":43163648},"runtime":{"goroutines":18}},"filebeat":{"harvester":{"open_files":0,"running":0}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"logstash"},"pipeline":{"clients":0,"events":{"active":0}}},"registrar":{"states":{"current":0}},"system":{"cpu":{"cores":2},"load":{"1":1.63,"15":1.04,"5":1.31,"norm":{"1":0.815,"15":0.52,"5":0.655}}}}}}
2020-10-23T07:03:57.537-0600    INFO    [monitoring]    log/log.go:154  Uptime: 3.084739791s
2020-10-23T07:03:57.537-0600    INFO    [monitoring]    log/log.go:131  Stopping metrics logging.
2020-10-23T07:03:57.537-0600    INFO    instance/beat.go:447    filebeat stopped.
2020-10-23T07:03:57.537-0600    ERROR   instance/beat.go:951    Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused. Response: .
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to http://localhost:5601/api/status fails: fail to execute the HTTP GET request: Get "http://localhost:5601/api/status": dial tcp 127.0.0.1:5601: connect: connection refused. Response: .
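The fatal error at the end shows Filebeat exiting because it cannot reach Kibana at the default http://localhost:5601 during the setup phase. If Kibana dashboards are not needed from this host, a sketch of the relevant filebeat.yml settings (mykibanaIP is a placeholder, not from my setup):

```yaml
# sketch: with dashboard loading left disabled (the default),
# Filebeat should not need to contact Kibana at startup
setup.dashboards.enabled: false

# if dashboards ARE wanted, point setup at the host actually
# running Kibana instead of the default localhost:5601
#setup.kibana:
#  host: "mykibanaIP:5601"   # placeholder host
```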

I was also unable to add

output.console:
    pretty: true

because Filebeat reported a duplicate key, though I'm not seeing any such entry in the file.
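For reference, Filebeat allows only one output to be enabled at a time, so adding output.console while output.logstash is still enabled is rejected; that may be what the complaint is about. A console-output sketch for debugging, with every other output.* section commented out (the Logstash host shown is a placeholder):

```yaml
# sketch: console output for debugging only
# (exactly one output.* section may be active at once)
#output.logstash:
#  hosts: ["mylogstashIP:5044"]   # placeholder host

output.console:
  pretty: true
```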

Please ignore that, I managed to add output.console,
but still no luck :frowning_face:
Below is the log....

cat /var/log/filebeat/filebeat
2020-10-23T07:11:01.846-0600    INFO    instance/beat.go:640    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat]
2020-10-23T07:11:01.847-0600    INFO    instance/beat.go:648    Beat ID: a78a3e05-0101-451b-a83a-c5f038159a89
2020-10-23T07:11:01.885-0600    INFO    [beat]  instance/beat.go:976    Beat info       {"system_info": {"beat": {"path": {"config": "/etc/filebeat", "data": "/var/lib/filebeat", "home": "/usr/share/filebeat", "logs": "/var/log/filebeat"}, "type": "filebeat", "uuid": "a78a3e05-0101-451b-a83a-c5f038159a89"}}}
2020-10-23T07:11:01.885-0600    INFO    [beat]  instance/beat.go:985    Build info      {"system_info": {"build": {"commit": "7aab6a9659749802201db8020c4f04b74cec2169", "libbeat": "7.9.3", "time": "2020-10-16T09:16:16.000Z", "version": "7.9.3"}}}
2020-10-23T07:11:01.885-0600    INFO    [beat]  instance/beat.go:988    Go runtime info {"system_info": {"go": {"os":"linux","arch":"amd64","max_procs":2,"version":"go1.14.7"}}}
2020-10-23T07:11:01.889-0600    INFO    [beat]  instance/beat.go:992    Host info       {"system_info": {"host": {"architecture":"x86_64","boot_time":"2020-04-22T20:48:03-06:00","containerized":false,"name":"dn1udstmgapp05","ip":["127.0.0.1/8","10.34.22.78/23","172.17.0.1/16"],"kernel_version":"3.10.0-1062.18.1.el7.x86_64","mac":["00:50:56:ab:23:d7","02:42:f8:b7:23:f9","82:ef:a4:94:c3:cb","3a:1e:a1:08:7b:e2","86:73:fb:b0:df:90","92:62:59:8e:c4:bf"],"os":{"family":"redhat","platform":"centos","name":"CentOS Linux","version":"7 (Core)","major":7,"minor":8,"patch":2003,"codename":"Core"},"timezone":"MDT","timezone_offset_sec":-21600,"id":"7a0e74447d9b4b27a6aaa85c547f2b51"}}}
2020-10-23T07:11:01.890-0600    INFO    [beat]  instance/beat.go:1021   Process info    {"system_info": {"process": {"capabilities": {"inheritable":null,"permitted":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"effective":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"bounding":["chown","dac_override","dac_read_search","fowner","fsetid","kill","setgid","setuid","setpcap","linux_immutable","net_bind_service","net_broadcast","net_admin","net_raw","ipc_lock","ipc_owner","sys_module","sys_rawio","sys_chroot","sys_ptrace","sys_pacct","sys_admin","sys_boot","sys_nice","sys_resource","sys_time","sys_tty_config","mknod","lease","audit_write","audit_control","setfcap","mac_override","mac_admin","syslog","wake_alarm","block_suspend"],"ambient":null}, "cwd": "/tmp", "exe": "/usr/share/filebeat/bin/filebeat", "name": "filebeat", "pid": 18909, "ppid": 10468, "seccomp": {"mode":"disabled","no_new_privs":false}, "start_time": "2020-10-23T07:11:01.640-0600"}}}
2020-10-23T07:11:01.890-0600    INFO    instance/beat.go:299    Setup Beat: filebeat; Version: 7.9.3
2020-10-23T07:11:01.892-0600    INFO    [publisher]     pipeline/module.go:113  Beat name: dn1udstmgapp05
2020-10-23T07:11:01.893-0600    WARN    beater/filebeat.go:178  Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.

I just tried upgrading to 7.9.3 as well...
same error.

After adding the section below, Filebeat started fine, but can you please confirm whether it is correct to add the Kibana entries to filebeat.yml like this?

setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "myelasticIP:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  space.id: "myspaceName"

But what I need is for Filebeat to send logs to Logstash, then Logstash to Elasticsearch and Kibana.
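For that Filebeat → Logstash → Elasticsearch flow, the relevant part of filebeat.yml should be just an input plus the Logstash output; the setup.kibana section is only consulted by the `filebeat setup` command (e.g. for loading dashboards). A minimal sketch, where the Logstash host and log paths are placeholders and the CA path is the one from the debug log above:

```yaml
filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/*.log   # placeholder: point this at the real logs

# Elasticsearch output stays disabled; events go to Logstash,
# which forwards them to Elasticsearch.
output.logstash:
  hosts: ["mylogstashIP:5044"]   # placeholder Logstash host
  ssl.certificate_authorities:
    - /etc/pki/tls/certs/logstash-forwarder.crt   # CA path from the log
```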

It would be helpful if you have any documentation for that as well.
As of now I'm following the guides below:

  1. https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7#install-kibana
  2. https://www.elastic.co/guide/en/beats/filebeat/7.9/filebeat-installation-configuration.html
    (RPM installation)

Can someone help with this?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.