How to save multiple logs to separate ES indices

Hello,

I am trying to send various types of logs through Filebeat -> Logstash -> Elasticsearch -> Kibana.
In Filebeat I added a custom field, log_type, under fields and assigned it a different value depending on the type of log before sending the events to Logstash. In the output section of the Logstash pipeline that sends the data to ES, I am unable to reference those fields set in Filebeat to build the index name.

Can someone show a sample of how this can be done please.

Code:

filebeat.yml:

#=========================== Filebeat inputs =============================

filebeat.inputs:
- type: log
  paths:
    - /var/log/httpd/dev-api-access_log
  fields:
    log_type: api_access_log
  fields_under_root: true
- type: log
  paths:
    - /var/www/sites/api/log/debug-*.log
  fields:
    log_type: debug_log
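As an aside, fields_under_root changes where the custom field ends up in the event, which matters when you reference it later in Logstash. With the config above, the two inputs would produce events shaped roughly like this (field values are illustrative):

```
{ "message": "...", "log_type": "api_access_log" }           <- first input (fields_under_root: true)
{ "message": "...", "fields": { "log_type": "debug_log" } }  <- second input (nested under "fields")
```

In Logstash the first is addressed as [log_type] and the second as [fields][log_type], so it is usually simplest to set fields_under_root the same way on every input.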

Logstash:

input {
  beats {
    port => "5044"
  }
}

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:timestamp}"]
  }
  date {
    match => ["timestamp", "ISO8601"]
  }
}

output {
  elasticsearch {
    hosts => ["xxxx"]
    index => "%{[@metadata][fields]}-%{[@metadata][log_type]}" # HOW TO REFER THE FIELDS FROM FILEBEAT TO CREATE SEPARATE INDEX?
  }
}
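Note that custom fields added in Filebeat do not land under [@metadata] (that holds things like [@metadata][beat] and [@metadata][version]). Since the first input uses fields_under_root: true, log_type is a top-level field on the event. A minimal sketch of an output that builds the index name from it (assuming the Filebeat config above) could look like:

```
output {
  elasticsearch {
    hosts => ["xxxx"]
    # log_type is top-level because of fields_under_root: true;
    # without that option it would be referenced as %{[fields][log_type]}
    index => "%{[log_type]}-%{+YYYY.MM.dd}"
  }
}
```

The %{+YYYY.MM.dd} date suffix is optional; it just gives you daily indices per log type.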

Here's one of our sanitized Logstash output sections:

output {
  if ([fields][app_id] == "acdt") {
    elasticsearch {
      hosts => [{{ ES_http }}]
      cacert => "/etc/logstash/certs/https_interm.cer"
      user => "{{ elastic.user }}"
      password => "{{elastic.pass }}"
      sniffing => false
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[fields][app_id]}-%{[fields][campus]}"
    }
  }
  else if "use_ingest" in [tags] and [fileset][module] {
    elasticsearch {
      hosts => [{{ ES_http }}]
      cacert => "/etc/logstash/certs/https_interm.cer"
      user => "{{ elastic.user }}"
      password => "{{elastic.pass }}"
      sniffing => false
      manage_template => false
      pipeline => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[fileset][module]}-%{[fileset][name]}-pipeline"
      ilm_enabled => true
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[fields][app_id]}-%{[fields][campus]}"
    }
  }
  else if "use_ingest" in [tags] and [agent][module] {
    elasticsearch {
      hosts => [{{ ES_http }}]
      cacert => "/etc/logstash/certs/https_interm.cer"
      user => "{{ elastic.user }}"
      password => "{{elastic.pass }}"
      sniffing => false
      manage_template => false
      pipeline => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[agent][module]}-%{[fileset][name]}-pipeline"
      ilm_enabled => true
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[fields][app_id]}-%{[fields][campus]}"
    }
  }
  else {
    elasticsearch {
      hosts => [{{ ES_http }}]
      cacert => "/etc/logstash/certs/https_interm.cer"
      user => "{{ elastic.user }}"
      password => "{{elastic.pass }}"
      sniffing => false
      manage_template => false
      ilm_enabled => true
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{[fields][app_id]}-%{[fields][campus]}"
    }
  }
}

All but the first of these write to an ILM alias, but you have to define the ILM pieces yourself first, including manually creating the initial empty index.
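For reference, bootstrapping that initial index and its ILM write alias is a one-time request, e.g. in Kibana Dev Tools (the index and alias names here are illustrative, not the ones from our template):

```
PUT filebeat-7.0.0-acdt-000001
{
  "aliases": {
    "filebeat-7.0.0-acdt": {
      "is_write_index": true
    }
  }
}
```

After that, Logstash writes to the alias and ILM handles the rollover to -000002 and beyond.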

This is an Ansible template, so consider the {{ vars }} "sanitized". The logic for [agent][module] and [fileset][module] is there to accommodate breaking changes between Beats versions (thanks, Elastic).

Thanks Rugen!
