How to set custom index in filebeat

Hi,
I am trying to configure a custom index in Filebeat with Logstash, so that it becomes easy to identify the servers by their index names. Here are my Filebeat config file and Logstash config file.

###################### Filebeat Configuration Example #########################

# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html

# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.

# ============================== Filebeat inputs ===============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

# log is an input for collecting log messages from files.
- type: log

  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/*.log
    #- c:\programdata\elasticsearch\logs\*

  # Optional per-input fields (commented out):
  #fields:
  #  host_ip: 52.151.196.131
  #fields_under_root: true

  # Exclude lines. A list of regular expressions to match. It drops the lines that are
  # matching any regular expression from the list.
  #exclude_lines: ['^DBG']

  # Include lines. A list of regular expressions to match. It exports the lines that are
  # matching any regular expression from the list.
  #include_lines: ['^ERR', '^WARN']

  # Exclude files. A list of regular expressions to match. Filebeat drops the files that
  # are matching any regular expression from the list. By default, no files are dropped.
  #exclude_files: ['.gz$']

  # Optional additional fields. These fields can be freely picked
  # to add additional information to the crawled log files for filtering
  #fields:
  #  level: debug
  #  review: 1

# ============================== Filebeat modules ==============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 1
  #index.codec: best_compression
  #_source.enabled: false


# ================================== General ===================================

# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:

# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]

# Optional fields that you can specify to add additional information to the
# output.
#fields:
#  env: staging

# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
setup.dashboards.enabled: false

# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:

# =================================== Kibana ===================================

# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
#setup.kibana:
  # Kibana Host
  # Scheme and port can be left out and will be set to the default (http and 5601)
  # In case you specify an additional path, the scheme is required: http://localhost:5601/path
  # IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
  #host: "localhost:5601"
setup.kibana:
  host: "http://x.x7.x.x:5601"
  username: "elastic"
  password: "nmDqdEvGJHpyCgv3CjWs"
  ssl.verification_mode: none
  # Kibana Space ID
  # ID of the Kibana Space into which the dashboards should be loaded. By default,
  # the Default Space will be used.
  #space.id:

# =============================== Elastic Cloud ================================

# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).

# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:

# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:

# ================================== Outputs ===================================

# Configure what output to use when sending the data collected by the beat.

# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #  hosts: ["http://52.247.226.222:9200"]

  #index: "azureindex-%{[agent.version]}"
  # Protocol - either `http` (default) or `https`.
  #protocol: "https"

  # Authentication credentials - either API key or username/password.
  #api_key: "id:api_key"
  #username: "elastic"
  #password: "changeme"

# ------------------------------ Logstash Output -------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["52.247.226.222:5044"]
  index: "azureindex-%{[agent.version]}"

setup.template.name: "azureindex"
setup.template.pattern: "azureindex-%{[agent.version]}"
setup.template.fields: "/etc/filebeat/fields.yml"
setup.template.overwrite: false

# Earlier attempts (commented out):
#setup.ilm.enabled: false               # ILM must be disabled for a custom index name to take effect
#setup.template.name: "k20sdev"         # custom template
#setup.template.pattern: "k20sdev-*"    # custom template pattern
#index.aliases: "k20sdev-"
#setup.template.enabled: false
#setup.template.overwrite: true
#setup.template.fields: "/etc/filebeat/fields.yml"
  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"

# ================================= Processors =================================
processors:
  - add_fields:
      fields:
        host.ip: "x.x.x.x"
  # Alternatives tried earlier (commented out):
  #- add_host_metadata:
  #    netinfo.enabled: true
  #    when.not.contains.tags: forwarded
  #- add_cloud_metadata: ~
  #- add_docker_metadata: ~
  #- add_kubernetes_metadata: ~

# ================================== Logging ===================================

# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
#logging.level: debug

# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publisher", "service".
#logging.selectors: ["*"]

# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster.  This requires xpack monitoring to be enabled in Elasticsearch.  The
# reporting is disabled by default.

# Set to true to enable the monitoring reporter.
#monitoring.enabled: false

# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:

# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:

# ============================== Instrumentation ===============================

# Instrumentation support for the filebeat.
#instrumentation:
    # Set to true to enable instrumentation of filebeat.
    #enabled: false

    # Environment in which filebeat is running on (eg: staging, production, etc.)
    #environment: ""

    # APM Server hosts to report instrumentation results to.
    #hosts:
    #  - http://localhost:8200

    # API Key for the APM Server(s).
    # If api_key is set then secret_token will be ignored.
    #api_key:

    # Secret token for the APM Server(s).
    #secret_token:


# ================================= Migration ==================================

# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true

And here is my Logstash config file:

filter {
 if [event][module] == 'mysql' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601} %{GREEDYDATA:messages} %{QUOTEDSTRING:user.name}%{NOTSPACE}%{IP:source.address}" }
    }
 }
 else if [event][dataset] == 'apache.error' {
    grok {
      match => { "message" => "%{TIME} %{GREEDYDATA} %{NOTSPACE:service.status}" }
    }
 }
 else if [event][module] == 'apache' {
    grok {
      match => { "message" => "%{IP:source.address} %{USER:ident} %{USER:auth} \[%{HTTPDATE:apache_timestamp}\] \"%{WORD:method} /%{NOTSPACE:request_page} HTTP/%{NUMBER:http_version}\" %{NUMBER:http.response.status_code} %{NUMBER:http.response.body.bytes}" }
    }
 }
}
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    user => "elastic"
    password => "nmDqdEvGJHpyCgv3CjWs"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}"
  }
}

Hi @huzaifa224

From what you described, your architecture is:

filebeat -> logstash -> elasticsearch

So in Filebeat, configure only the hosts in the Logstash output, like below. And you don't need the setup.kibana section when shipping through Logstash.
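A minimal sketch of that Filebeat output section (host and port taken from your config, everything else left at defaults):

# Elasticsearch output stays disabled; events go through Logstash
#output.elasticsearch:
#  hosts: ["http://52.247.226.222:9200"]

output.logstash:
  hosts: ["52.247.226.222:5044"]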

In the Logstash config file you need to add the input part, and in the elasticsearch output use the index you want, for example "azureindex-%{[agent.version]}":


input {
  beats {
    port => 5044
  }
}
filter {
 if [event][module] == 'mysql' {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601} %{GREEDYDATA:messages} %{QUOTEDSTRING:user.name}%{NOTSPACE}%{IP:source.address}" }
    }
 }
 else if [event][dataset] == 'apache.error' {
    grok {
      match => { "message" => "%{TIME} %{GREEDYDATA} %{NOTSPACE:service.status}" }
    }
 }
 else if [event][module] == 'apache' {
    grok {
      match => { "message" => "%{IP:source.address} %{USER:ident} %{USER:auth} \[%{HTTPDATE:apache_timestamp}\] \"%{WORD:method} /%{NOTSPACE:request_page} HTTP/%{NUMBER:http_version}\" %{NUMBER:http.response.status_code} %{NUMBER:http.response.body.bytes}" }
    }
 }
}
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    user => "elastic"
    password => "nmDqdEvGJHpyCgv3CjWs"
    index => "azureindex-%{[agent.version]}"
  }
}

And in Elasticsearch you have to (see the sketch after this list):

create the ILM policy for the index
create the index template for the index
create the index
create the index alias
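One way to do those four steps, as a rough sketch in Kibana Dev Tools (the policy name, rollover settings, and initial index name are assumptions for illustration, not from this thread):

PUT _ilm/policy/azureindex-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": { "rollover": { "max_size": "50gb", "max_age": "30d" } }
      }
    }
  }
}

PUT _index_template/azureindex
{
  "index_patterns": ["azureindex-*"],
  "template": {
    "settings": {
      "index.number_of_shards": 1,
      "index.lifecycle.name": "azureindex-policy",
      "index.lifecycle.rollover_alias": "azureindex"
    }
  }
}

PUT azureindex-000001
{
  "aliases": { "azureindex": { "is_write_index": true } }
}

The last call covers both "create the index" and "create the index alias": the first concrete index is created together with a write alias that ILM can roll over.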

Hi @ibra
If I set the custom index name in Logstash and in Elasticsearch, then every server will get the same index name when I run multiple Filebeats. What I want is to run multiple Filebeat instances with different index names, so that it helps me identify the servers.

Hi @huzaifa224

You want an index for each server?

In my humble opinion, you can create a Logstash config file for each Filebeat, specifying the port on which Logstash has to listen: each server gets its own pipeline.
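A rough sketch of that layout, assuming two servers (the pipeline IDs, ports, config paths, and index names are placeholders I've made up):

# /etc/logstash/pipelines.yml — one pipeline per server
- pipeline.id: server-a
  path.config: "/etc/logstash/conf.d/server-a.conf"
- pipeline.id: server-b
  path.config: "/etc/logstash/conf.d/server-b.conf"

# /etc/logstash/conf.d/server-a.conf — the Filebeat on server A points at port 5044
input { beats { port => 5044 } }
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    user => "elastic"
    password => "changeme"
    index => "server-a-%{[@metadata][version]}"
  }
}

# /etc/logstash/conf.d/server-b.conf — the Filebeat on server B points at port 5045
input { beats { port => 5045 } }
output {
  elasticsearch {
    hosts => ["http://x.x.x.x:9200"]
    user => "elastic"
    password => "changeme"
    index => "server-b-%{[@metadata][version]}"
  }
}

Each Filebeat then only needs its own output.logstash.hosts entry pointing at its port, and the index name identifies the server.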

