Create new indices on each Beats update


I have an ELK cluster, the topology is quite simple:
Linux_host -> Logstash_node -> Elasticsearch_node
Each Linux_host has filebeat+auditbeat installed and sends data to the Logstash_node.
The Logstash_node receives all the Beats data and writes it to Elasticsearch using the following index setting:
"index" => "%{[@metadata][beat]}-%{[@metadata][version]}"
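For reference, that setting sits in an elasticsearch output along these lines (hosts and credentials here are placeholders, not my real values):

```
output {
  elasticsearch {
    hosts    => ["https://elasticsearch_node:9200"]   # placeholder host
    index    => "%{[@metadata][beat]}-%{[@metadata][version]}"
    user     => "logstash_writer"                     # placeholder credentials
    password => "${LOGSTASH_PW}"
  }
}
```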

In other words, Logstash writes the data to "filebeat-7.17.4"; hosts still running other versions write to their own version-specific indices until they are updated.
Note: Logstash has privileges to create indices
The "filebeat-7.17.4" index is actually an alias to "filebeat-7.17.4-000001" so I can use index rollover.
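The alias is bootstrapped by hand, roughly with the following request (shown here for 7.17.4; the index name changes per Beats version):

```
PUT filebeat-7.17.4-000001
{
  "aliases": {
    "filebeat-7.17.4": {
      "is_write_index": true
    }
  }
}
```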

The problem I am having is that when a new update comes out for Beats, Logstash can start writing to an index that does not have an alias.
Consider the following scenario: Beats 7.17.5 comes out and a Linux_host is updated to the new filebeat version. Logstash now writes the data to "filebeat-7.17.5", but Elasticsearch does not have such an index yet, so Logstash creates a plain "filebeat-7.17.5" index without the "-000001" suffix (and without the rollover alias).

To somewhat counter this issue, I created a simple systemd service on the Elasticsearch nodes that runs an extra filebeat instance called "filebeat-Elasticsearch", which sends data to Elasticsearch directly (without Logstash). Whenever something is sent, this filebeat instance creates an index template for "filebeat-7.17.5" and then creates the index, based on the following config:

setup.template.enabled: true
setup.template.name: "filebeat-%{[agent.version]}"
setup.template.pattern: "filebeat-%{[agent.version]}-*"
setup.template.overwrite: false
setup.template.settings:
  index.number_of_shards: 1
  index.number_of_replicas: 1
setup.ilm.enabled: auto
setup.ilm.rollover_alias: "filebeat-%{[agent.version]}"
setup.ilm.policy_name: "server-logger-policy"
setup.ilm.pattern: "000001"

This is also not ideal: I must make sure that the Elasticsearch node is updated before any other host, which does happen in most cases, but sometimes another host gets updated first. There is also the issue that something must actually be sent through filebeat before the proper new index is created. I have a simple .log file that gets a new line appended, but this is not ideal either, since the index is not always created instantly (at least that happens to me).
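The trigger itself is just a small script run by the systemd service; a minimal sketch (the log path is a placeholder, mine differs):

```shell
#!/bin/sh
# Append a timestamped line to a log file that the extra filebeat
# instance harvests, so at least one event is shipped and the new
# versioned index gets created.
BOOTSTRAP_LOG="/tmp/beats-bootstrap/trigger.log"
mkdir -p "$(dirname "$BOOTSTRAP_LOG")"
echo "index-bootstrap $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$BOOTSTRAP_LOG"
```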

What is the proper way to ensure that the right index is always created after Beats updates?
I need to use Logstash to enrich the log data. I can send a small amount of "dummy" data directly to Elasticsearch, but not from all hosts.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.