Logstash Conditional Indexes

I am having great difficulty parsing my Filebeat input into separate indices dependent on some custom fields.

My IIS server has Filebeat installed, and the filebeat.yml looks like this:

filebeat.inputs:
  enabled: true
  fields: {log_type: iis}

My beats output on my logstash server looks like this:

output {
  if "iis" in [fields][log_type] {
    elasticsearch {
      hosts => "elasticsearch"
      index => "logstash-beats-iis-%{+YYYY.MM.dd}"
      template_name => "logstash-beats"
      template => "/beats-template.json"
      template_overwrite => true
    }
  }
  else {
    elasticsearch {
      hosts => "elasticsearch"
      index => "logstash-beats-%{+YYYY.MM.dd}"
      template_name => "logstash-beats"
      template => "/beats-template.json"
      template_overwrite => true
    }
  }
}

No logs are coming through at all, and I'm not sure whether the problem is the conditional logic or the setup. I would appreciate it if anyone can help.

Hi Luke,

There are two things that need to be considered.

I am not sure if this is the correct syntax to add a field. I believe it is:

fields:
  log_type: "iis"
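For reference, a minimal filebeat.yml sketch of where that would sit: custom fields belong under a list entry of filebeat.inputs (note the leading dash), which is also what the original config above is missing. The log path here is just a placeholder:

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - C:\inetpub\logs\LogFiles\*\*.log
    fields:
      log_type: "iis"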

If both configurations are equivalent, please let me know; I am not quite sure.

Secondly, is there a field called [fields][log_type] in Elasticsearch? I believe it would not be a nested field and should just be referenced as log_type.

In that scenario, the logstash condition would be

if [log_type] == "iis" {
.....
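For what it's worth, which reference works depends on Filebeat's fields_under_root setting, which defaults to false: by default custom fields are nested under fields, and only with fields_under_root: true do they land at the top level. A sketch of the two cases:

# filebeat.yml, default (fields_under_root: false)
fields:
  log_type: "iis"
# -> referenced in Logstash as [fields][log_type]

# filebeat.yml with fields_under_root: true
fields:
  log_type: "iis"
fields_under_root: true
# -> referenced in Logstash as [log_type]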

Let me know if it works.

Thanks for the reply.

I got this to work, but only from a property already on the log; I used the pre-defined

if [fileset][module] == "iis" {
....

The config was being read okay with that syntax. I noticed there wasn't any [fields][log_type] field in Elasticsearch, so my filebeat.yml config was not working.

I suppose my question is: what is the best approach to tag or label Filebeat inputs from within filebeat.yml? What is the best practice?

Perhaps,

fields:
  log_type: "iis"

Would work, I will have to test.
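Another option worth a look (just a sketch, the path is illustrative) is Filebeat's built-in tags setting, which appends to the top-level tags array and can be tested with Logstash's in operator:

# filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - C:\inetpub\logs\LogFiles\*\*.log
    tags: ["iis"]

# logstash output conditional
if "iis" in [tags] {
  ...
}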

Thanks

I have tried to stay away from if statements, so I solved a similar situation like this:

Filebeat

---
- type: log
  paths:
    - /var/log/foo.log
  encoding: plain
  fields:
    log_prefix: bar
    log_idx: logs
  fields_under_root: false

In Logstash filter section

# Adding @metadata needed for index sharding to Filebeat logs
mutate {
  copy => {
   "[fields][log_prefix]" => "[@metadata][log_prefix]"
   "[fields][log_idx]" => "[@metadata][index]"
  }
}

Logstash output

output {

  elasticsearch {
        hosts => ["10.1.1.1:9200", "10.1.1.2:9201"]
        index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }

}

The logs from /var/log/foo.log would go into an index called bar-logs-2019.02.27.

Both approaches are perfectly fine. I don't think there is any best practice associated with this.

However, using the if statement does add one extra check. What we have done is add a field, used as the index name, to our inputs; that field decides where each event gets indexed.

elasticsearch {
  action => "index"
  hosts => ["https://xyz:9200"]
  index => "operations-%{indice}"
}

Something similar to what @A_B has mentioned.
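For illustration (the path and field value are hypothetical), the indice field in that output could come straight from the Filebeat input with fields_under_root: true, so no conditional or mutate is needed:

# filebeat.yml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/app.log
    fields:
      indice: "app"
    fields_under_root: true

With that in place, index => "operations-%{indice}" would resolve to operations-app.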
