How can I create 2 indexes for 2 Filebeat inputs?


Hi med,

Your config would be much easier to read if you used preformatted text (the </> symbol in the toolbar)...

Here is how I do what you are looking for.

```
cat /etc/filebeat/conf.d/syslogs.yml
...
- type: log
  paths:
    - /var/log/auth.log
    - /var/log/faillog
    - /var/log/messages
    - /var/log/syslog
  encoding: plain
  fields:
    log_prefix: dc
    log_idx: syslog-beat
  fields_under_root: false
...
```

So I add a few extra fields per type of log; `log_idx` ends up as part of the index name.
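To get a second index for your second input, you would add another `- type: log` block with a different `log_idx` value. Here is a sketch; the paths and the `apache-beat` name are just placeholders for whatever your second input is:

```
- type: log
  paths:
    - /var/log/apache2/access.log   # placeholder path
  encoding: plain
  fields:
    log_prefix: dc
    log_idx: apache-beat            # different value => different index
  fields_under_root: false
```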

In my Logstash filter section I have:

```
filter {
...
    #############
    # Adding @metadata needed for index sharding to Filebeat logs
    mutate {
      copy => {
        "[fields][log_prefix]" => "[@metadata][log_prefix]"
        "[fields][log_idx]" => "[@metadata][index]"
      }
    }
  }
  #############
...
```

I copy into @metadata because on some Logstash inputs I already set [@metadata][log_prefix] and [@metadata][index] directly; Filebeat events carry those values under [fields], so the mutate/copy moves them into [@metadata] and every event reaches the output with the same two metadata fields.
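For reference, setting those metadata fields directly on an input looks something like this. The tcp input and port are just placeholders for illustration; `add_field` is a standard option on Logstash input plugins and can write into `[@metadata]`:

```
input {
  tcp {
    port => 5514                    # placeholder port
    add_field => {
      "[@metadata][log_prefix]" => "dc"
      "[@metadata][index]"      => "netsyslog"   # hypothetical index name
    }
  }
}
```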

Finally, in my Logstash output I use:

```
output {

  elasticsearch {
...
    index => "%{[@metadata][log_prefix]}-%{[@metadata][index]}-%{+YYYY.MM.dd}"
  }
}
```
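With the fields from the Filebeat config above, an event lands in a daily index named like this (the date is just an example):

```
dc-syslog-beat-2024.01.15
```

A nice side effect of using [@metadata] is that those fields are never written into the event itself, so they do not show up in the stored documents.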

Adjust to taste :smiley:

Hope that helps.

-AB

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.