Don't see logs in Kibana Stack Monitoring

Hello there!
I have seen a similar topic, but it has been closed for two years.
I cannot see structured logs in Kibana Stack Monitoring.
[screenshot: ELK-STRUCT-LOGS]

I am using Filebeat with the elasticsearch module, and this is the module's config:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-elasticsearch.html

- module: elasticsearch
  # Server log
  server:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
#      - /var/log/elasticsearch/*.log          # Plain text logs
      - /var/log/elasticsearch/*_server.json  # JSON logs
      - /var/log/kibana/*.log
  gc:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/gc.log.[0-9]*
      - /var/log/elasticsearch/gc.log

  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
 #     - /var/log/elasticsearch/*_access.log  # Plain text logs
      - /var/log/elasticsearch/*_audit.json  # JSON logs

  slowlog:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
  #    - /var/log/elasticsearch/*_index_search_slowlog.log     # Plain text logs
  #    - /var/log/elasticsearch/*_index_indexing_slowlog.log   # Plain text logs
      - /var/log/elasticsearch/*_index_search_slowlog.json    # JSON logs
      - /var/log/elasticsearch/*_index_indexing_slowlog.json  # JSON logs

  deprecation:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
   #   - /var/log/elasticsearch/*_deprecation.log   # Plain text logs
      - /var/log/elasticsearch/*_deprecation.json  # JSON logs

I have tried to discover where the issue might be by disabling and enabling sections, but still no clue.
Could you please help?

I can provide other configs if needed.

Hello @cheshirecat!

Have a read through this Discuss issue; it steps through a number of things to check:

If you can provide the information that was requested there (running the query against your indices to see if you have data, checking whether your ingest pipelines are in place), it would be easier to help.
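
For example, to check that the module's ingest pipelines were loaded, you can run something like this in Dev Tools (the name here assumes Filebeat 7.17.5; adjust the version to yours):

GET _ingest/pipeline/filebeat-7.17.5-elasticsearch-*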

Hello,
For this query:

POST filebeat-*/_search?filter_path=hits.hits._source.event.dataset,hits.hits._source.@timestamp,hits.hits._source.elasticsearch
{
  "size": 10,
  "sort": [
    {
      "@timestamp": {
        "order": "desc"
      }
    }
  ],
  "collapse": {
    "field": "event.dataset"
  }
}

I got:

{
  "hits" : {
    "hits" : [
      {
        "_source" : {
          "@timestamp" : "2022-08-12T12:26:53.709Z"
        }
      },
      {
        "_source" : {
          "@timestamp" : "2022-08-11T12:19:03.000+02:00",
          "event" : {
            "dataset" : "system.syslog"
          }
        }
      },
      {
        "_source" : {
          "@timestamp" : "2022-08-11T12:17:01.000+02:00",
          "event" : {
            "dataset" : "system.auth"
          }
        }
      },
      {
        "_source" : {
          "@timestamp" : "2022-08-11T10:03:12.211Z",
          "event" : {
            "dataset" : "threatintel.malwarebazaar"
          }
        }
      },
      {
        "_source" : {
          "@timestamp" : "2022-08-11T09:55:34.935Z",
          "event" : {
            "dataset" : "threatintel.abusemalware"
          }
        }
      },
      {
        "_source" : {
          "@timestamp" : "2022-08-11T09:45:34.271Z",
          "event" : {
            "dataset" : "threatintel.abuseurl"
          }
        }
      }
    ]
  }
}

And for:

POST filebeat-*/_search
{
  "size": 0,
  "sort": {
    "@timestamp": {
      "order": "desc"
    }
  },
  "query": {
    "bool": {
      "filter": [
        {
          "term": {
            "service.type": "elasticsearch"
          }
        },
        {
          "range": {
            "@timestamp": {
              "format": "epoch_millis",
              "gte": 1595594404844,
              "lte": 1595598004844
            }
          }
        },
        {
          "term": {
            "elasticsearch.cluster.uuid": "{cluster_uuid}"
          }
        }
      ]
    }
  },
  "aggs": {
    "types": {
      "terms": {
        "field": "event.dataset"
      },
      "aggs": {
        "levels": {
          "terms": {
            "field": "log.level"
          }
        }
      }
    }
  }
}
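
(Note: {cluster_uuid} in the query above is a placeholder copied from the linked issue - run literally, it matches nothing - and the epoch_millis range points to mid-2020. The actual UUID can be read from the root endpoint, and the range should be moved to a current window:

GET /
)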

I got this response:

{
  "took" : 11,
  "timed_out" : false,
  "_shards" : {
    "total" : 4,
    "successful" : 3,
    "skipped" : 3,
    "failed" : 1,
    "failures" : [
      {
        "shard" : 0,
        "index" : "filebeat-7.17.5-2022.08.09",
        "node" : "0o62NIo5RASWB-ndygBy2w",
        "reason" : {
          "type" : "illegal_argument_exception",
          "reason" : "Text fields are not optimised for operations that require per-document field data like aggregations and sorting, so these operations are disabled by default. Please use a keyword field instead. Alternatively, set fielddata=true on [event.dataset] in order to load field data by uninverting the inverted index. Note that this can use significant memory."
        }
      }
    ]
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : 0.0,
    "hits" : [ ]
  }
}
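
A side note on that shard failure: it means event.dataset is mapped as plain text in at least one filebeat-* index, i.e. the Filebeat index template was not applied there. One way to confirm (assuming the same index pattern) is to check the field mapping:

GET filebeat-*/_mapping/field/event.dataset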

I'm a newbie here, so please be patient, as I sometimes have to learn before taking action.

Welcome to the community :slight_smile: Apologies if I seemed rude! Based on that last search, it does look like your data is not being parsed correctly.

A few further things to check:

Hello,
I hope I did it as you asked:

{
  "filebeat-7.17.5-elasticsearch-server-pipeline" : {
    "description" : "Pipeline for parsing elasticsearch server logs",
    "processors" : [
      {
        "set" : {
          "value" : "{{_ingest.timestamp}}",
          "field" : "event.ingested"
        }
      },
      {
        "rename" : {
          "field" : "@timestamp",
          "target_field" : "event.created"
        }
      },
      {
        "grok" : {
          "field" : "message",
          "patterns" : [
            "^%{CHAR:first_char}"
          ],
          "pattern_definitions" : {
            "CHAR" : "."
          }
        }
      },
      {
        "pipeline" : {
          "name" : "filebeat-7.17.5-elasticsearch-server-pipeline-plaintext",
          "if" : "ctx.first_char != '{'"
        }
      },
      {
        "pipeline" : {
          "if" : "ctx.first_char == '{'",
          "name" : "filebeat-7.17.5-elasticsearch-server-pipeline-json"
        }
      },
      {
        "script" : {
          "source" : "if (ctx.elasticsearch.server.gc != null && ctx.elasticsearch.server.gc.observation_duration != null) {\n  if (ctx.elasticsearch.server.gc.observation_duration.unit == params.seconds_unit) {\n    ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time * params.ms_in_one_s;\n  }\n  if (ctx.elasticsearch.server.gc.observation_duration.unit == params.milliseconds_unit) {\n    ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time;\n  }\n  if (ctx.elasticsearch.server.gc.observation_duration.unit == params.minutes_unit) {\n    ctx.elasticsearch.server.gc.observation_duration.ms = ctx.elasticsearch.server.gc.observation_duration.time * params.ms_in_one_m;\n  }\n} if (ctx.elasticsearch.server.gc != null && ctx.elasticsearch.server.gc.collection_duration != null) {\n  if (ctx.elasticsearch.server.gc.collection_duration.unit == params.seconds_unit) {\n    ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time * params.ms_in_one_s;\n  }\n  if (ctx.elasticsearch.server.gc.collection_duration.unit == params.milliseconds_unit) {\n    ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time;\n  }\n  if (ctx.elasticsearch.server.gc.collection_duration.unit == params.minutes_unit) {\n    ctx.elasticsearch.server.gc.collection_duration.ms = ctx.elasticsearch.server.gc.collection_duration.time * params.ms_in_one_m;\n  }\n}",
          "params" : {
            "minutes_unit" : "m",
            "seconds_unit" : "s",
            "milliseconds_unit" : "ms",
            "ms_in_one_s" : 1000,
            "ms_in_one_m" : 60000
          },
          "lang" : "painless"
        }
      },
      {
        "set" : {
          "field" : "event.kind",
          "value" : "event"
        }
      },
      {
        "set" : {
          "field" : "event.category",
          "value" : "database"
        }
      },
      {
        "script" : {
          "source" : "def errorLevels = ['FATAL', 'ERROR']; if (ctx?.log?.level != null) {\n  if (errorLevels.contains(ctx.log.level)) {\n    ctx.event.type = 'error';\n  } else {\n    ctx.event.type = 'info';\n  }\n}",
          "lang" : "painless"
        }
      },
      {
        "set" : {
          "ignore_empty_value" : true,
          "field" : "host.name",
          "value" : "{{elasticsearch.node.name}}"
        }
      },
      {
        "set" : {
          "field" : "host.id",
          "value" : "{{elasticsearch.node.id}}",
          "ignore_empty_value" : true
        }
      },
      {
        "remove" : {
          "field" : [
            "elasticsearch.server.gc.collection_duration.time",
            "elasticsearch.server.gc.collection_duration.unit",
            "elasticsearch.server.gc.observation_duration.time",
            "elasticsearch.server.gc.observation_duration.unit"
          ],
          "ignore_missing" : true
        }
      },
      {
        "remove" : {
          "ignore_missing" : true,
          "field" : [
            "elasticsearch.server.timestamp",
            "elasticsearch.server.@timestamp"
          ]
        }
      },
      {
        "remove" : {
          "field" : [
            "first_char"
          ]
        }
      }
    ],
    "on_failure" : [
      {
        "set" : {
          "field" : "error.message",
          "value" : "{{ _ingest.on_failure_message }}"
        }
      }
    ]
  }
}

It never happened - I'm sorry for the misunderstanding.

For installation I used the ES 7.x repo and then apt-get - this way it's easier to use Ansible scripts.
This is my ES output (I always test everything with ES output before I start to use Logstash):

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["my_ip_address:10000"]

  # Protocol - either `http` (default) or `https`.
  protocol: "http"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

Kibana settings:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "my_ip_address:8801"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

Logging section:

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
logging.selectors: ["*"]

Chris

Any ideas, guys?

@JLeysens maybe you have one?

Hi @cheshirecat, welcome to the community; apologies that you are having issues.

And to be clear...

  • What version of the Stack are you on?

  • And are you just trying to get "Stack Logs" in Monitoring?

  • Also, are Filebeat, Elasticsearch, and Kibana all on the same node? (They do not need to be, but Filebeat needs to run where the Elasticsearch and Kibana log files are.)

  • Also, you did not show the filebeat.yml.

If I get a chance later or over the weekend, I will take a look. If I remember correctly, you are missing a yml setting, or there could be a bug where you also have to ship metrics as well (though I think that was fixed).

Like, did you set this up from here?

You definitely need this; it is probably your biggest issue:
Set xpack.monitoring.collection.enabled to true on the production cluster in elasticsearch.yml. By default, it is disabled (false).

xpack.monitoring.collection.enabled: true
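
(If you would rather not restart the node, the same setting can also be applied dynamically via the cluster settings API:)

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.enabled": true
  }
}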

Also, that field mapping error is usually because you did not run setup after configuring modules.d/elasticsearch.yml:

filebeat setup -e

Also, you don't put Kibana logs in the elasticsearch module; that is not correct:

- /var/log/kibana/*.log
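
Kibana logs belong in Filebeat's own kibana module instead; a minimal sketch of modules.d/kibana.yml (the log fileset name and the path are assumptions based on 7.x defaults):

- module: kibana
  # Logs written by the Kibana server itself
  log:
    enabled: true
    var.paths:
      - /var/log/kibana/*.log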

And finally, when you show out-of-order snippets of code it is much harder to debug than if you just show the full filebeat.yml and modules.d/elasticsearch.yml, as well as probably elasticsearch.yml.

OK, I tested. I got monitoring to work by simply adding this to the Elasticsearch elasticsearch.yml:

xpack.monitoring.collection.enabled: true

Then I enabled the server log in modules.d/elasticsearch.yml and pointed it to my log file:

- module: elasticsearch
  # Server log
  server:
    enabled: true

    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths: ["/Users/sbrown/workspace/elastic-install/8.3.3/elasticsearch-8.3.3/logs/local_server.json"]

Then I ran
filebeat setup -e

Then I started filebeat

filebeat -e

Then the logs showed up in monitoring
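
At that point, re-running the earlier collapse query should list an elasticsearch.server dataset; a trimmed-down version of that check (assuming the mapping is now correct so the collapse works):

POST filebeat-*/_search?filter_path=hits.hits._source.event.dataset
{
  "size": 5,
  "collapse": { "field": "event.dataset" }
}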

Hello :slight_smile:

There is nothing to apologize for - it's just life :smiley: Maybe my config is not as good as I thought :wink:

Version is 7.17.5.

Yes, and all I see is a message that there were no structured logs.

Yes, they are.

Done. After restarting Elasticsearch, making changes to filebeat/modules.d/elasticsearch.yml, and running

filebeat setup -e

and then

systemctl start filebeat

there is a "little" change. Now I can see this message:
[screenshot: es-logs]
But changing the time filter does not make a difference.

This is my modules.d/elasticsearch.yml:

# Module: elasticsearch
# Docs: https://www.elastic.co/guide/en/beats/filebeat/7.17/filebeat-module-elasticsearch.html

- module: elasticsearch
  # Server log
  server:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
#      - /var/log/elasticsearch/*.log          # Plain text logs
      - /var/log/elasticsearch/*_server.json  # JSON logs

  gc:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
      - /var/log/elasticsearch/gc.log.[0-9]*
      - /var/log/elasticsearch/gc.log

  audit:
    enabled: true
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
 #     - /var/log/elasticsearch/*_access.log  # Plain text logs
      - /var/log/elasticsearch/*_audit.json  # JSON logs

  slowlog:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
  #    - /var/log/elasticsearch/*_index_search_slowlog.log     # Plain text logs
  #    - /var/log/elasticsearch/*_index_indexing_slowlog.log   # Plain text logs
      - /var/log/elasticsearch/*_index_search_slowlog.json    # JSON logs
      - /var/log/elasticsearch/*_index_indexing_slowlog.json  # JSON logs

  deprecation:
    enabled: false
    # Set custom paths for the log files. If left empty,
    # Filebeat will choose the paths depending on your OS.
    var.paths:
   #   - /var/log/elasticsearch/*_deprecation.log   # Plain text logs
      - /var/log/elasticsearch/*_deprecation.json  # JSON logs

and I know there are log files:

# ls -lah /var/log/elasticsearch/ | grep server
-rw-r--r--  1 elasticsearch elasticsearch  54K 06-21 11:45 elasticsearch_server.json
-rw-r--r--  1 elasticsearch elasticsearch  47K 08-19 08:32 ks-es_server.json

This is filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: log

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - "/var/log/*.log"
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  fields:
    level: debug
    review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false

# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: node-1-filebeat-network

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["node-1-filebeat-network"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "IP.ADDRESS:PORT"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["IP.ADDRESS:PORT"]

  # Protocol - either `http` (default) or `https`.
  protocol: "http"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
 # hosts: ["IP.ADDRESS:PORT"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

  - include_fields:
      fields: ["cpu"]
  - drop_fields:
      fields: ["cpu.user", "cpu.system"]
# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: info

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: true

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

reload.enabled: true
elasticsearch.cluster.uuid: "CLUSTER_ID"

And this is my elasticsearch.yml:

 ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ks-es
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /var/lib/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# By default Elasticsearch is only accessible on localhost. Set a different
# address here to expose this node on the network:
#
network.host: IP.ADDRESS
#
# By default Elasticsearch listens for HTTP traffic on the first free port it
# finds starting at 9200. Set a specific HTTP port here:
#
http.port: PORT
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["IP.ADDRESS"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["IP.ADDRESS"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
# ---------------------------------- Security ----------------------------------
#
#                                 *** WARNING ***
#
# Elasticsearch security features are not enabled by default.
# These features are free, but require configuration changes to enable them.
# This means that users don’t have to provide credentials and can get full access
# to the cluster. Network connections are also not encrypted.
#
# To protect your data, we strongly encourage you to enable the Elasticsearch security features. 
# Refer to the following documentation for instructions.
#
# https://www.elastic.co/guide/en/elasticsearch/reference/7.16/configuring-stack-security.html
node.master: true
node.data: true
transport.port: PORT
discovery.seed_providers: file
xpack.monitoring.collection.enabled: true

Chris

Not sure why you have that [the generic log input under filebeat.inputs] enabled?

Those [the stray reload.enabled and elasticsearch.cluster.uuid lines at the end of filebeat.yml] are not correct.

And I assume the missing # in the very first character of elasticsearch.yml is just a typo.

There are sensitive values, so I don't want to show them.
In my config file, PORT is the number of the port that I use for transport.

I did it like it is in the manual: link.

As I said: I'm a newbie - still learning. Shouldn't I have it enabled? I thought it was necessary.

No ... disable that... that is a whole additional file harvester... and it makes things harder to debug.
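
(i.e., in filebeat.yml, switch the generic input off while you debug; a sketch based on the config shown earlier:)

filebeat.inputs:
- type: log
  id: my-filestream-id
  # Disabled: this generic /var/log/*.log harvester is separate from the modules
  enabled: false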

So clean up everything... delete the filebeat indices and index pattern, etc. (e.g. as sketched below).
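
In Dev Tools, something like this deletes the indices (careful - it deletes data); the index pattern can be removed in Kibana under Stack Management → Index Patterns:

DELETE filebeat-*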

Rerun filebeat setup then start filebeat again...

Then Go to Kibana -> Dev Tools and run

GET _cat/indices?v

And show me those results

I will also redo it on the same version to see if something is different.

Also, what do the filebeat logs show? There is probably something useful in them... please share them.

I'm sorry for the late answer - I was sick the whole weekend.

Done.

Hopefully done :wink:

Done:

Loaded Ingest pipelines

Done. Results:

#! Elasticsearch built-in security features are not enabled. Without authentication, your cluster could be accessible to anyone. See https://www.elastic.co/guide/en/elasticsearch/reference/7.17/security-minimal-setup.html to enable security.
health status index                             uuid                   pri rep docs.count docs.deleted store.size pri.store.size
green  open   .monitoring-logstash-7-2022.08.18 bqlWRp21SuSU6vcLAlg5cQ   1   1     224512            0     29.8mb         14.9mb
green  open   .monitoring-logstash-7-2022.08.19 unGZ1v0UTxinaYc84-vHKQ   1   1     235143            0     26.8mb         13.3mb
green  open   .monitoring-logstash-7-2022.08.17 aCQA8EKHRqixYY2ywtKHqA   1   1     212319            0     25.9mb         12.9mb
green  open   apm-7.17.5-onboarding-2022.08.16  Qw0sneR1T3eAgIetBoDjHQ   1   1          1            0     16.3kb          8.1kb
green  open   apm-7.17.5-onboarding-2022.08.10  UTnGRDvkTEivznfxJpuszQ   1   1          1            0     16.3kb          8.1kb
green  open   .kibana_7.17.5_001                Q0u9Rcj2Siyt-2S-jpp96w   1   1       7749         2282     24.8mb           12mb
green  open   apm-7.17.5-onboarding-2022.08.11  7MHhQNfyTLWYTdfIyxMyOw   1   1          1            0     16.3kb          8.1kb
green  open   .apm-custom-link                  NivBG4DkSFSwa1txBJtm6g   1   1          0            0       452b           226b
green  open   filebeat-7.17.5-2022.08.23-000001 ZHV25ViqQiaNXk3tV9QMnw   1   1          0            0       452b           226b
green  open   heartbeat-7.17.5                  LHzBTn-SRO26tR54hmuYnQ   1   1      14961            0     12.7mb          6.3mb
green  open   .monitoring-kibana-7-2022.08.19   AeWGxBbyTR6KBeMEKBkVgQ   1   1      51730            0     22.7mb         11.3mb
green  open   .monitoring-kibana-7-2022.08.17   hFxz9wG_SxG5F0-yoaPaoQ   1   1      51800            0     21.5mb         10.7mb
green  open   .monitoring-kibana-7-2022.08.18   1jkqB8iVTa2Avdsff5_qmw   1   1      51802            0     21.7mb         10.8mb
green  open   .apm-agent-configuration          WU-6ZnUKRKK44PMhZ53T2g   1   1          0            0       452b           226b
green  open   .tasks                            KRi--4lIQrSL_xOhA95Cdg   1   1        172            0    372.6kb        177.3kb
green  open   logs-000001                       Jr7Uj-zGRfmalr8qR_gpLQ   1   1          0            0       452b           226b
green  open   transform-test2                   FRYcsmtwRomwfhO0rPf9Jg   1   1          1            0      7.8kb          3.9kb
green  open   metricbeat-7.17.5-2022.08.23      Dnt2_SpJRYWo7bRSs75-Ow   1   1     899489            0    903.8mb        351.2mb
green  open   metricbeat-7.17.5-2022.08.22      o7PP2CqdRsGHFyDHeoUG0Q   1   1    1830325            0      1.3gb        679.3mb
green  open   .monitoring-es-7-mb-2022.08.21    Mnh2DUDTRtm6j1NEvzDo_g   1   1     623978       184764    812.2mb        404.2mb
green  open   .monitoring-es-7-mb-2022.08.20    Y2-fpT0YTy-h10PGJzK8dg   1   1     607818       139043    774.9mb        385.8mb
green  open   log-0001                          2_uhoBXxQEeeD4sS42aJ0Q   1   1          0            0       452b           226b
green  open   metricbeat-7.17.5-2022.08.21      s0heX0HfRzq6Tdk-SkNo2w   1   1    1803115            0      1.3gb        668.6mb
green  open   metricbeat-7.17.5-2022.08.20      rpWDkEb3Sz2B3Z7dcIjWHQ   1   1    1755127            0      1.2gb        646.4mb
green  open   .geoip_databases                  4aKuteb9Ro6evZ0jOff5Qw   1   1         41           36       83mb         41.5mb
green  open   auditbeat-7.17.5                  czO7UpL4QNyuzGfmFT9tvQ   1   1      14237            0     24.3mb         12.1mb
green  open   .monitoring-es-7-mb-2022.08.23    OLnSJsWpTIWL59QI-iyHhA   1   1     306600       168317    514.9mb        306.5mb
green  open   .kibana_task_manager_7.17.4_001   bhHoWY7ITR2IxHEDq-oskQ   1   1         17         2075    660.1kb          330kb
green  open   .monitoring-es-7-mb-2022.08.22    hQf8g53bQNGnco_lkfibEA   1   1     633774       177907    815.5mb        406.3mb
green  open   metricbeat-7.17.5-2022.08.19      ghYePACRTeWyZ9pswsLHZw   1   1    1693201            0      1.3gb        717.8mb
green  open   metricbeat-7.17.5-2022.08.18      o5wP4QLjStygv1x4__28ag   1   1    1645349            0      1.2gb        642.1mb
green  open   .transform-internal-007           t7LZmuGzR9CP4WDC4utGGg   1   1          3            0     46.9kb         23.4kb
green  open   metricbeat-7.17.5-2022.08.17      DUBGc-T4S3Wf9d1vEycqVw   1   1    1423073            0        1gb        553.7mb
green  open   metricbeat-7.17.5-2022.08.16      sDqOCTdARYKZg5fU35bdpA   1   1     987921            0    772.4mb        386.2mb
green  open   .kibana_task_manager_7.17.5_001   CMVJUDyZT-ujyjIuyXUqIQ   1   1         17       384863     82.8mb         42.7mb
green  open   .monitoring-logstash-7-2022.08.23 nowO2ONmQl2Ov6JbJM54LA   1   1     110941            0     13.5mb          6.8mb
green  open   .monitoring-logstash-7-2022.08.21 6mRPAFqyQqqzaqI6b2t4IA   1   1     250431            0     27.4mb         13.6mb
green  open   .kibana_7.17.4_001                TaSnmWSBTeq-9NH1mhZAPA   1   1       2604          191        7mb          3.5mb
green  open   .monitoring-logstash-7-2022.08.22 34x5jQtQTu-pxhR-rrbgjw   1   1     250367            0     28.2mb         14.1mb
green  open   .monitoring-kibana-7-2022.08.22   v2-jtYCfRGK-fFvDdu0jFA   1   1      51824            0     21.3mb         10.6mb
green  open   .monitoring-kibana-7-2022.08.23   la_P0793TH6mh2paqLnYaA   1   1      22950            0     10.9mb          5.5mb
green  open   .monitoring-logstash-7-2022.08.20 sDXL0aoIT2eGjmD5ErTcZw   1   1     250431            0     27.4mb         13.8mb
green  open   packetbeat-7.17.5                 jUTbqXLGTWmGSPA4eso4JQ   1   1     387212            0      282mb          141mb
green  open   .monitoring-kibana-7-2022.08.20   A0Rnc6N_RgWhSj7r-t8RlQ   1   1      51822            0     21.1mb         10.4mb
green  open   .monitoring-kibana-7-2022.08.21   InW_qytJTYiwS4fiTQQ96Q   1   1      51818            0     21.1mb         10.6mb
green  open   .monitoring-es-7-mb-2022.08.19    943r3y59TGK9eg7eVU9eOQ   1   1     597297       164246      757mb        379.5mb
green  open   .async-search                     qK4QM58TR-q5CJAHKYvM-w   1   1         16          619    280.5kb        135.3kb
green  open   .monitoring-es-7-mb-2022.08.18    D3-_DK6zTTa7i7zVDHl6Rw   1   1     570362       187643    767.5mb        383.7mb
green  open   .monitoring-es-7-mb-2022.08.17    PlfRg6JLRhePi9QkMP1ikQ   1   1     566146       161057    751.4mb        375.7mb

My /var/log/filebeat/filebeat:

2022-08-23T09:46:50.111+0200    INFO    instance/beat.go:685    Home path: [/usr/share/filebeat] Config path: [/etc/filebeat] Data path: [/var/lib/filebeat] Logs path: [/var/log/filebeat] Hostfs Path: [/]
2022-08-23T09:46:50.115+0200    INFO    instance/beat.go:693    Beat ID: aa22cd41-6ab9-4dd3-9f32-c6a99ed01be4
2022-08-23T09:46:53.119+0200    WARN    [add_cloud_metadata]    add_cloud_metadata/provider_aws_ec2.go:79       read token request for getting IMDSv2 token returns empty: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers). No token in the metadata request will be used.

I think I made a mistake, so I did what you asked once again. The changed part of this post is the result of GET _cat/indices?v.

Dears @stephenb @JLeysens, I probably solved this. I am going to test until tomorrow.
If everything is OK, I will post my configs after the changes.

Dears,
I am trying to do the same, but with HTTPS/SSL.
I ran filebeat setup -e but got an error message:

2022-08-30T11:57:44.710+0200	INFO	kibana/client.go:180	Kibana url: https://{{ip_address}}:{{port_number}}
2022-08-30T11:57:44.711+0200	ERROR	instance/beat.go:1014	Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to https://{{ip_address}}:{{port_number}}/api/status fails: fail to execute the HTTP GET request: Get "https://{{ip_address}}:{{port_number}}/api/status": dial tcp {{ip_address}}:{{port_number}}: connect: connection refused. Response: .
Exiting: error connecting to Kibana: fail to get the Kibana version: HTTP GET request to https://{{ip_address}}:{{port_number}}/api/status fails: fail to execute the HTTP GET request: Get "https://{{ip_address}}:{{port_number}}/api/status": dial tcp {{ip_address}}:{{port_number}}: connect: connection refused. Response: .

I checked the credentials - they're OK, I can log in to Kibana using the username and password.

What configs would you like to see?

Is Kibana really running on https, or on http?
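
A quick way to check what the endpoint actually speaks, reusing the same masked host and port placeholders:

curl -vk https://{{ip_address}}:{{port_number}}/api/status
# if that is refused, try plain http:
curl -v http://{{ip_address}}:{{port_number}}/api/status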

Show the filebeat.yml

This is kibana.yml:

server.host: "195.xxx.xxx.xxx"

server.publicBaseUrl: "https://xxx.xxx.xxx.pl"

and my filebeat.yml:

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================

#filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
#- type: filestream

  # Unique ID among all inputs, an ID is required.
 # id: my-filestream-id

  # Change to true to enable this input configuration.
  #enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  #paths:
   # - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: node-1-filebeat-shipper

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["node-1-tag"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "https://{{ip_address}}:{{port}}"
  username: "elastic"
  password: "{{pass}}"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["{{ip_address}}:{{port_number}}", "{{ip_address}}:{{port_number}}", "{{ip_address}}:{{port_number}}"]

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "{{pass}}"
  ssl.verification_mode: none
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
 # hosts: ["{{ip_address}}:{{port_number}}", "{{ip_address}}:{{port_number}}", "{{ip_address}}:{{port_number}}" ]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/filebeat/elastic-stack-ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/kibana/es.crt.pem"

  # Client Certificate Key
  #ssl.key: "/etc/kibana/es.key.pem"
  #ssl.verification_mode: none

# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

setup.ilm.enabled: false

Comment that out [the empty setup.dashboards.url: line].

You probably need this under the setup.kibana section if you are using a self-signed cert:

ssl.verification_mode: none
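
Roughly, so that the option sits under setup.kibana (host and credentials are placeholders):

setup.kibana:
  host: "https://{{ip_address}}:{{port}}"
  username: "elastic"
  password: "{{pass}}"
  ssl.verification_mode: none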

Look at the docs here

Done.

Done.

Same error. Tomorrow morning I will check the docs once again.

So, this is my filebeat.yml:

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: true

  # Period on which files under path should be checked for changes
  reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  index.codec: best_compression
  #_source.enabled: false
# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
name: node-1-filebeat-shipper

# The tags of the shipper are included in their own field with each
# transaction published.
tags: ["node-1-tag"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: true

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:

  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  host: "https://{{ip_address}}:{{port_number}}"
  username: "elastic"
  password: "{{pass}}"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:
  ssl.verification_mode: none
# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["{{ip_address}}:{{port_number}}", "{{ip_address}}:{{port_number}}", "{{ip_address}}:{{port_number}}>

  # Protocol - either `http` (default) or `https`.
  protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  username: "elastic"
  password: "{{pass}}"
  ssl.verification_mode: none
# ================================= Processors =================================
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~


setup.ilm.enabled: false