Filebeat cannot communicate with Kibana

Hello everyone,

Filebeat is not working with my setup. When I try to run ./filebeat setup -e, I get the error message below.

I have Elasticsearch and Kibana installed on a single host and they are working well. I also have the Snort IPS/IDS installed on a Linux server, and I want to ship the Snort logs to Kibana.

Here is my filebeat.yml:

filebeat.inputs:

- type: log
  enabled: false
  paths:
     - /var/log/snort/snort.log.*

setup.kibana:
  host: "172.31.16.100:5601"

output.elasticsearch:
  hosts: ["172.31.16.100:9200"]
  username: "elastic"
  password: "elastic"

172.31.16.100 is pingable from the Snort server, and when I run curl -XGET -u elastic:elastic http://172.31.16.100:5601/api/status I get a response.

Elasticsearch and Filebeat are both version 7.17.

Please help!

Have you checked that Kibana is running? ps aux and netstat -tlpn should show port 5601 active.
server.host might be set to localhost; check in kibana.yml:

server.host: "0.0.0.0"
server.port: 5601

Use curl to get a response from Kibana.
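
For example, a minimal sketch of those checks from the Kibana host, assuming the host and credentials from the first post:

# Is the Kibana process running?
ps aux | grep kibana

# Is anything listening on port 5601?
netstat -tlpn | grep 5601

# Does the Kibana status API answer?
curl -u elastic:elastic http://172.31.16.100:5601/api/status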

Kibana is configured on host 172.31.16.100, port 5601. It is working fine and it already receives beats from Winlogbeat and Packetbeat on two Windows machines without any problems. The problem is only with Filebeat on the Linux server. The server itself can access Kibana and gets a response when I curl it. That's the weird thing.

Thanks for your response, Rios. I appreciate your help!

Did you curl from the Filebeat host?
The error is: no such host
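
A "no such host" error means the value of setup.kibana.host does not resolve on the Filebeat machine. A quick way to check, assuming the configured name is "elastic" as it appears in the logs further down:

# Does the name resolve on the Filebeat host?
getent hosts elastic

# Test the exact URL Filebeat will use
curl -u elastic:elastic http://elastic:5601/api/status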

Yes, I did. The screenshot below shows the output of curl -XGET -u elastic:elastic http://172.31.16.100:5601/api/status.

I am sorry, I can't copy and paste the output because I am connecting to a VM on ESXi.

Thank you!

I don't see any error.
Did you forget to copy these, or to add them to filebeat.yml?
setup.kibana.username
setup.kibana.password
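
In filebeat.yml they would sit under the setup.kibana section, for example (host and credentials taken from the first post):

setup.kibana:
  host: "172.31.16.100:5601"
  username: "elastic"
  password: "elastic"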

I added an entry for the Kibana host in the hosts file and it connected.
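The entry maps the Kibana name used in filebeat.yml to its IP, roughly like this in /etc/hosts:

172.31.16.100   elastic
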
But now when I run ./filebeat setup -e I get this error message:

Loading dashboards (Kibana must be running and reachable)
2022-09-30T18:37:44.597-0400	INFO	kibana/client.go:180	Kibana url: http://elastic:5601
2022-09-30T18:37:44.756-0400	INFO	[add_cloud_metadata]	add_cloud_metadata/add_cloud_metadata.go:101	add_cloud_metadata: hosting provider type not detected.
2022-09-30T18:37:47.582-0400	INFO	kibana/client.go:180	Kibana url: http://elastic:5601

2022-09-30T18:39:06.502-0400	INFO	instance/beat.go:869	Kibana dashboards successfully loaded.
Loaded dashboards
2022-09-30T18:39:06.512-0400	WARN	[cfgwarn]	instance/beat.go:594	DEPRECATED: Setting up ML using Filebeat is going to be removed. Please use the ML app to setup jobs. Will be removed in version: 8.0.0
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/machine-learning/current/index.html
It is not possble to load ML jobs into an Elasticsearch 8.0.0 or newer using the Beat.
2022-09-30T18:39:06.513-0400	INFO	[esclientleg]	eslegclient/connection.go:105	elasticsearch url: http://172.31.16.100:9200
2022-09-30T18:39:06.516-0400	INFO	[esclientleg]	eslegclient/connection.go:284	Attempting to connect to Elasticsearch version 7.17.0
2022-09-30T18:39:06.516-0400	INFO	kibana/client.go:180	Kibana url: http://elastic:5601
2022-09-30T18:39:06.560-0400	WARN	fileset/modules.go:463	X-Pack Machine Learning is not enabled
2022-09-30T18:39:06.562-0400	ERROR	instance/beat.go:1015	Exiting: 1 error: error loading config file: invalid config: yaml: line 13: did not find expected key
Exiting: 1 error: error loading config file: invalid config: yaml: line 13: did not find expected key

Do you have any idea? Here is my filebeat.yml:

##################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

  - type: log
    enabled: false
    paths:
      - /var/log/snort/snort.log.*

# filestream is an input for collecting log messages from files.
  - type: filestream

    # Change to true to enable this input configuration.
    enabled: true

    # Paths that should be crawled and fetched. Glob based paths.
    paths:
      - /var/log/snort.log.*

  #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "elastic"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["172.31.16.100:9200"]

  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "elastic"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

setup.ilm.overwrite: true

I have tested your config and couldn't find any error, on Filebeat 8.4 on Windows. Adding logging.level: debug in filebeat.yml might help.
By the way, Kibana can also be addressed by a plain IP.
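
The debug settings are already present as comments in your filebeat.yml; uncommented they look like this:

logging.level: debug
logging.selectors: ["*"]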

There were some indentation issues in snort.yml, which I fixed. Now when I run ./filebeat setup -e I get the output below, but I don't see any logs in Kibana.

Loaded machine learning job configurations
2022-10-01T12:33:05.507-0400	INFO	[esclientleg]	eslegclient/connection.go:105	elasticsearch url: http://elastic:9200
2022-10-01T12:33:05.510-0400	INFO	[esclientleg]	eslegclient/connection.go:284	Attempting to connect to Elasticsearch version 7.17.0
2022-10-01T12:33:05.513-0400	INFO	[esclientleg]	eslegclient/connection.go:105	elasticsearch url: http://elastic:9200
2022-10-01T12:33:05.516-0400	INFO	[esclientleg]	eslegclient/connection.go:284	Attempting to connect to Elasticsearch version 7.17.0
2022-10-01T12:33:05.526-0400	INFO	[modules]	fileset/pipelines.go:133	Elasticsearch pipeline loaded.	{"pipeline": "filebeat-7.17.0-snort-log-pipeline"}
2022-10-01T12:33:05.526-0400	INFO	cfgfile/reload.go:262	Loading of config files completed.
2022-10-01T12:33:05.527-0400	INFO	[load]	cfgfile/list.go:129	Stopping 1 runners ...
Loaded Ingest pipelines

Am I missing anything here?
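
For reference, filebeat setup only loads assets (dashboards, index templates, ingest pipelines); it does not ship any events. A quick sanity check that Filebeat itself is indexing, using the host and credentials from earlier in the thread:

# Run Filebeat itself (setup alone does not publish events)
./filebeat -e

# In another shell: are any filebeat-* indices receiving documents?
curl -u elastic:elastic 'http://172.31.16.100:9200/_cat/indices/filebeat-*?v'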

Here is the snort.yml:

# Module: snort
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.16/filebeat-module-snort.html

- module: snort
  log:
    enabled: true

    # Set which input to use between udp (default), tcp or file.
    # var.input: file
    # var.syslog_host: localhost
    # var.syslog_port: 9532

    # Set paths for the log files when file input is used.
    var.paths: ["/var/log/snort/*"]

    # Toggle output of non-ECS fields (default true).
    # var.rsa_fields: true

    # Set custom timezone offset.
    # "local" (default) for system timezone.
    # "+02:00" for GMT+02:00
    # var.tz_offset: local
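
For reference, the earlier "did not find expected key" error is the classic symptom of sibling keys indented differently in a YAML file. A hypothetical before/after, not the actual broken file:

# Broken: var.paths is indented one space deeper than its sibling key
- module: snort
  log:
    enabled: true
     var.paths: ["/var/log/snort/*"]

# Fixed: sibling keys aligned at the same level
- module: snort
  log:
    enabled: true
    var.paths: ["/var/log/snort/*"]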
