Filebeat error when setting up data stream with working index

I am using filebeat version 7.17.22.

Initially, my Filebeat works and sends data to an Elasticsearch index, which I can see in Kibana.

I have been trying to set up a data stream, so I created one with the default index pattern Filebeat creates, using the command below (learned from here):

PUT _data_stream/filebeat-7.17.22-

Then I tried to point my Filebeat output at this data stream by setting output.elasticsearch.index to filebeat-7.17.22-.

However, I got the error:

Exiting: error loading template: failed to load template: couldn't load template: 400 Bad Request: {"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"composable template [filebeat-7.17.22] with index patterns [filebeat-7.17.22-*], priority [150] and no data stream configuration would cause data streams [filebeat-7.17.22-] to no longer match a data stream template"}]
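For reference, inspecting the template Filebeat loaded shows the index pattern filebeat-7.17.22-* and no data_stream section, which I believe is what the error is complaining about:

GET _index_template/filebeat-7.17.22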

I searched online but couldn't find anything that solved the issue.

My filebeat.yml file:

filebeat.inputs:
- type: filestream
  id: my-filestream-id
  enabled: true
  paths:
    - /path/to/logs/*.txt

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

setup.template.settings:
  index.number_of_shards: 1

setup.dashboards.enabled: false

setup.kibana:
  host: "http://localhost:5601"
  username: "USERNAMEHERE"
  password: "PASSWORDHERE"

output.elasticsearch:
  hosts: ["https://localhost:9200/"]
  preset: balanced
  protocol: "https"
  username: "USERNAMEHERE"
  password: "PASSWORDHERE"
  pipeline: "filebeat-pipeline"
  index: "filebeat-%{[agent.version]}-"

  ssl:
    enabled: true
    certificate_authorities: /path/to/ca_cert

setup.template.enabled: true
setup.template.name: "filebeat-%{[agent.version]}"
setup.template.pattern: "filebeat-%{[agent.version]}-*"

processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded

logging.level: debug

setup.ilm.enabled: false
setup.ilm.check_exists: true

Some links that I have been looking at to troubleshoot the issue: url1, url2, url3

I also tried to migrate the index to a data stream according to the Migrate to data stream API, but just got the error:

"type": "illegal_argument_exception",
        "reason": "no matching index template found for data stream [filebeat-7.17.22]"

when running the command:

POST /_data_stream/_migrate/filebeat-7.17.22
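From what I understand, the migrate API only succeeds when a composable template with a data_stream section already matches the data stream name; something like this hypothetical template (the name and priority are placeholders I made up):

PUT _index_template/filebeat-7.17.22-ds
{
  "index_patterns": ["filebeat-7.17.22"],
  "data_stream": {},
  "priority": 300
}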

I think your error might be due to the setup.template parameters.
%{[agent.version]} is automatically added to the name and pattern, so you are applying it twice, which no longer matches the output.

Thank you for your reply! According to the documentation you gave me:

The Filebeat version is always appended to the given name, so the final name is filebeat-%{[agent.version]}.

So, I believe that means that I can just get rid of the agent.version part and it will work?

Something like this?

setup.template.name: "filebeat"
setup.template.pattern: "filebeat-*"

I will try this and get back to you if it works. Thank you!

Hi @Trevor_Blackford, I tried what you suggested, and no, it doesn't seem like Filebeat automatically appends the %{[agent.version]} when you just have the default template name and pattern set.

I decided to make a new index template called filebeat-7.17.22-test with the same index pattern filebeat-7.17.22-*. I made a data stream called filebeat-7.17.22-test using the command:

PUT _data_stream/filebeat-7.17.22-test

Then, in my filebeat.yml file, I configured it as such:

output.elasticsearch:
  index: "filebeat-%{[agent.version]}-test"

setup.template.enabled: true
setup.template.name: "filebeat-%{[agent.version]}-test"
setup.template.pattern: "filebeat-%{[agent.version]}-*"

This did not work and I got the same error as before. I decided to try your advice again, but accidentally made a typo, naming the setup.template.pattern as "filebea-*", as seen below

output.elasticsearch:
  index: "filebeat-%{[agent.version]}-test"

setup.template.enabled: true
setup.template.name: "filebeat-%{[agent.version]}-test"
setup.template.pattern: "filebea-*"

And somehow, this worked? The index template loaded and the data stream filebeat-7.17.22-test is taking the Filebeat output. However, when I checked my data stream in Kibana, it was not using the index template I had specified, but the default one Filebeat creates, which is filebeat-7.17.22.

When I checked my index templates, filebeat-7.17.22-test did not even have the Data stream portion ticked, even though I had ticked it before?

I tried changing the setup.template.name to filebeat-7.17.22, but that gave me the same error again.

I was wondering if you had any insight into how this is presumably working (logs are being output with the correct processing)?

Sorry to mislead you with the defaults. I can see that's the default setting supplied with the default config.

The problem here indeed has to do with data streams, which must have certain mappings in their index template in order to work.

@Tom_N Here is an easy way to get started:

Clone the existing 7.17.22 index template, rename it, and enable the data stream setting. It needs to not match the existing index pattern, or you will need to remove the default template. That is all I changed before saving it (you should go back in later and clean up the aliases, etc.).
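If you would rather do it in Dev Tools than in the Kibana UI, the cloned template boils down to something roughly like this (mappings and most settings trimmed; the names here are just examples, and the data_stream section is the important part):

PUT _index_template/filebeat-datastream-7.17.22
{
  "index_patterns": ["filebeat-datastream-*"],
  "data_stream": {},
  "priority": 200,
  "template": {
    "settings": {
      "index.number_of_shards": 1
    }
  }
}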

Create the data stream

PUT _data_stream/filebeat-datastream-7.17.22
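You can confirm the data stream exists with:

GET _data_stream/filebeat-datastream-7.17.22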

My entire working filebeat.yml:


filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# filestream is an input for collecting log messages from files.
- type: filestream

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

# ======================= Elasticsearch template setting =======================
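# Filebeat's own template/ILM setup is disabled here because the index template
# was created manually (the cloned template above).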
setup.ilm.enabled: false
setup.template.enabled: false

setup.kibana:

output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]
  index: filebeat-datastream-%{[agent.version]}

Then

./filebeat -e

There you go...

No worries, thank you for your help nonetheless!

Hi @stephenb, this worked, thank you!
I want to ask, though: why does the filebeat.reference.yml file say

# In case you modify this pattern you must update setup.template.name and setup.template.pattern accordingly.
output.elasticsearch.index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}"

# The template name and pattern has to be set in case the Elasticsearch index pattern is modified.
#setup.template.name: "filebeat-%{[agent.version]}"

I originally assumed you had to change the setup.template name and pattern whenever you specified a custom index/data stream. But does it just mean that if you are using your own template for the specified index, you should update them accordingly?

Also, does migrating to a data stream change how Filebeat processors or ingest pipelines work? On closer inspection, one field is missing: user_id (_source.user_id). Instead, the value now appears under the field user_id.#text. The user_id field is created by processing the XML in Filebeat with decode_xml and then filtering it with ingest pipeline processors. Is there a way I can extract user_id.#text into user_id instead?

I suspect this issue was caused by migrating to a data stream and lies on the Elasticsearch side, possibly in my ingest pipeline. I am sure my grok processor works; the rename processor might not be working, though.
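In case it helps anyone debugging something similar: I believe the pipeline can be tested in isolation with the simulate API, feeding it a minimal made-up document shaped like the event before the pipeline runs (the field names below are just an example):

POST _ingest/pipeline/filebeat-pipeline/_simulate
{
  "docs": [
    {
      "_source": {
        "xml_data": {
          "user_id": { "#text": "12345" }
        }
      }
    }
  ]
}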

Below is my filebeat.yml config for the parsers and processors:

  parsers:
    - multiline:
        type: pattern
        pattern: '([0-9]+(\.[0-9]+)+)\s([0-9]+(:[0-9]+)+)'
        negate: true
        match: after
  processors:
    - dissect:
        tokenizer: "%{header}\n\n%{xmlmsg}"
        field: "message"
        target_prefix: ""
        trim_values: "left"
        trim_chars: " \t"
    - script:
        lang: javascript
        source: >
          function process(event) {
            var xmlmsg = event.Get("xmlmsg");
            event.Put("xmlmsg", xmlmsg.trim());
          }
    - decode_xml:
        field: xmlmsg
        target_field: xml_data
        overwrite_keys: true

Those all relate to when you are using indices... and the various ways they can be set up.

No. But there are additional ways they can be specified in the index templates, etc... Please ask a separate question if you are interested in this.

You will need to rename it using a Filebeat rename processor or in an ingest pipeline. IMPORTANT: it also depends on whether user_id is just a single field or an object; are there other fields like user_id.other_field?

And while we are at it, the proper user field name per ECS would be user.id.
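As a sketch (assuming user_id.#text is a single leaf field; the pipeline name here is a placeholder, and in practice you would add the rename processor to your existing filebeat-pipeline rather than create a new one):

PUT _ingest/pipeline/rename-user-id-example
{
  "processors": [
    {
      "rename": {
        "field": "user_id.#text",
        "target_field": "user.id",
        "ignore_missing": true
      }
    }
  ]
}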

If you have additional questions, please open a specific topic.


Thank you for your reply and help Stephen!