Filebeat 8.8.0 error loading template: failed to put data stream: could not put data stream: 400 Bad Request

I have installed Filebeat 8.8.0. After running filebeat setup -e the index template is created, but the index is not showing up when I try to create a data view, and the filebeat setup command itself fails with the error below:

Exiting: error loading template: failed to put data stream: could not put data stream: 400 Bad Request: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"no matching index template found for data stream [staging]"}],"type":"illegal_argument_exception","reason":"no matching index template found for data stream [staging]"},"status":400}. Response body: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"no matching index template found for data stream [staging]"}],"type":"illegal_argument_exception","reason":"no matching index template found for data stream [staging]"},"status":400}

Did you create a template for your data stream?

Please share the template you are using.

I didn't create any template manually. I installed Filebeat 8.8.0, configured filebeat.yml to send data to Elasticsearch, and ran filebeat setup -e.
When I was using Filebeat 7.17.1 I followed the same steps to create the index template and index pattern, and that worked fine.
I am using a custom template:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: "http://x.x.x.x.x:9200"
  username: "citsale"
  password: "mFVKkXXNnOuUzgahLpLpp"
  # Protocol - either `http` (default) or `https`.
  ssl.verification_mode: none

  # Authentication credentials - either API key or username/password.
  index: "staging"
setup.template.name: "staging"
setup.template.pattern: "staging-*"

You are using a template named staging, and you didn't create it?

Beats 8.X changed some things: they now primarily write to data streams instead of normal indices, and a data stream needs a matching index template.
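For example, with the 8.8.0 defaults (I am assuming you kept the default names), setup creates a built-in index template and Filebeat writes to a data stream named after the Beat and its version. You can check both in Kibana Dev Tools:

# Names below assume the 8.8.0 defaults; adjust to your version.
GET _data_stream/filebeat-*
GET _index_template/filebeat-8.8.0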

What do you mean by "didn't create it"? Do I have to create my staging index manually first, instead of just mentioning it in my filebeat.yml and running filebeat setup -e?

And do I have to create the data stream manually in Kibana before running the filebeat setup -e command?

BTW, I am following this doc to configure Filebeat.

In your filebeat.yml you have these two lines:

setup.template.name: "staging"
setup.template.pattern: "staging-*"

This tells Filebeat to use a template named staging. That template needs to be created in Elasticsearch before you send your data with Filebeat; since it does not exist, you are getting the error you shared in the first post.

So you will need to manually create a template named staging, following the documentation.

Not exactly. That documentation does not change the template name and only uses some default modules, so in that case Filebeat uses the default index name and loads the default template for that index when it sends data.

In your case you are telling Filebeat to use a different template, one that does not exist.

Please correct me if I am wrong: doesn't the filebeat setup -e command create the index template in Kibana?
In Filebeat 7.17.1 I never created any template manually; I just ran the above command, the template was created, and the index pattern showed up as well. I am a little confused between Filebeat 7.17.1 and 8.8.0, since I just upgraded from the older version to the newer one.

No, it will not create your template automatically; you need to create it yourself.

filebeat setup -e will load the default template, ingest pipelines, ILM policies, etc., but you are not using the default template; you are using a template named staging, which has to be created by you.

You can configure Filebeat to load your custom template automatically, but either way you will need to define it yourself.
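
For example, something like this in your filebeat.yml should make filebeat setup load its base template under your custom name (a sketch, untested; overwrite is only needed if a wrong staging template already exists):

# A sketch, untested: let filebeat setup load its base template under
# the custom name. Note the pattern must also match the data stream
# name itself ("staging"), so "staging*" rather than "staging-*".
setup.template.enabled: true
setup.template.name: "staging"
setup.template.pattern: "staging*"
setup.template.overwrite: true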

How did you do it in Filebeat 7.17? If you set a custom template name like staging, Filebeat would throw a similar error if the template did not exist, unless you were using the default index name, I think.

The main difference is that Filebeat 8 writes to data streams, which are a little different from normal indices, and for a data stream to work you need a matching index template, as specified in the documentation.

What does the command GET _index_template/staging return in Kibana Dev Tools?
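
For reference, run it like this in Dev Tools; if I am not mistaken, a 404 response would mean the template was never created:

# Returns the template JSON if it exists, a 404 otherwise.
GET _index_template/staging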

Also, please share your entire filebeat.yml.

If you run this in Kibana Dev Tools, it will create a minimal template named staging for normal indices, not data streams.

PUT /_index_template/staging
{
  "index_patterns" : ["staging-*"],
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 1
    }
  }
}
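
For a data stream the template would additionally need the data_stream object, something like this (a sketch, untested; note the pattern staging* so it also matches a data stream named exactly staging):

PUT /_index_template/staging
{
  "index_patterns" : ["staging*"],
  "data_stream" : {},
  "priority" : 1,
  "template": {
    "settings" : {
      "number_of_shards" : 1
    }
  }
}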

Actually, you can get Filebeat to automatically create and set up a template based on the Filebeat base template.

I think I have an example of it here

Thanks buddy, I have followed the document and the data is now showing in the Discover tab of Kibana. Thanks once again, much appreciated.

Here is my filebeat.yml

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Unique ID among all inputs, an ID is required.
  id: my-filestream-id

  # Change to true to enable this input configuration.
  enabled: false

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  # Line filtering happens after the parsers pipeline. If you would like to filter lines
  # before parsers, use include_message parser.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #prospector.scanner.exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
        #  enabled: false
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================

setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
setup.dashboards.enabled: true
# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
   host: "http://x.x.x.x:8315"
   username: "city"
   password: "LOnUzificagahp"
   ssl.verification_mode: none
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify and additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"

  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: "http://x.x.x.x:8314"
  username: "cty"
  password: "LOnUzificagahp"
  ssl.verification_mode: none
  index: "staging-%{[agent.version]}"
setup.template.name: "staging"
setup.template.pattern: "staging-*"
setup.ilm.enabled: true
#setup.dashboards.index: "staging-*"
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
#output.logstash:
  # The Logstash hosts
  #hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_host_metadata:
  - add_fields:
      fields:
        host.ip: "x.x.x.x"
        host.name: "linux-test"
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
