How to create a new filebeat index with a custom name?


What is the best way to create a new index for Filebeat and output it to Elasticsearch/Kibana?

The filebeat logs will still be parsed through logstash.
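Since the logs still flow through Logstash, the Logstash side needs a beats input listening for Filebeat connections. A minimal sketch (5044 is the conventional beats port, an assumption here; adjust to your setup):

    input {
      beats {
        port => 5044
      }
    }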

So far, I have enabled this in the elasticsearch output section of filebeat.yml on the filebeat client:

    # Optional index name. The default is "filebeat" and generates
    # [filebeat-]YYYY.MM.DD keys.
    index: "appstash-dev-%{+YYYY.MM.dd}"

    # A template is used to set the mapping in Elasticsearch.
    # By default template loading is disabled and no template is loaded.
    # These settings can be adjusted to load your own template or overwrite existing ones.
    template:
      # Template name. By default the template name is filebeat.
      name: "appstash"
      # Path to template file
      path: "appstash.template.json"

And I have uploaded the new template on the ELK server after modifying the template's last line to:

  "template": "appstash-*"

curl -XPUT 'http://localhost:9200/_template/appstash?pretty' -d@appstash-index-template.json
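To confirm the template was registered, you can read it back with the index template API (a sketch, assuming Elasticsearch on localhost:9200):

    curl -XGET 'http://localhost:9200/_template/appstash?pretty'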


Figured out that you only need to make changes in the output section of the Logstash configuration file:

    output {
      stdout { }
      elasticsearch {
        hosts => ["localhost:9200"]
        sniffing => true
        manage_template => false
        index => "appstash-%{environment}-%{+YYYY.MM.dd}"
        document_type => "appstash"
      }
    }
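Note that the %{environment} sprintf reference only resolves if each event actually carries an environment field. One way to guarantee that (a sketch that hard-codes the value per pipeline; the "dev" value is an assumption) is a mutate filter:

    filter {
      # Add an "environment" field so the index name
      # "appstash-%{environment}-%{+YYYY.MM.dd}" resolves.
      mutate {
        add_field => { "environment" => "dev" }
      }
    }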

If you use the Logstash output configuration from the documentation, then customizing the index value under output.logstash in your Filebeat config should work as expected: it changes the [@metadata][beat] value on each event to the configured index name. This also allows the document_type option from the Filebeat prospector config to flow through correctly.
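On the Filebeat side, that looks roughly like the following (a sketch assuming the Filebeat 1.x YAML layout and a Logstash listener on port 5044; the log path and type name are placeholders):

    filebeat:
      prospectors:
        - paths:
            - /var/log/app/*.log
          # Becomes [@metadata][type] on the Logstash side
          document_type: "applog"

    output:
      logstash:
        hosts: ["localhost:5044"]
        # Becomes [@metadata][beat] on the Logstash side
        index: "appstash"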


    output {
      elasticsearch {
        hosts => "localhost:9200"
        manage_template => false
        index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
        document_type => "%{[@metadata][type]}"
      }
    }

Perfect, thank you!

