Multiple inputs and multiple outputs

Good morning. I set up a log server with the ELK stack on one machine. I need to ship logs to it from 16 other machines, and I'd like to have 16 named indexes. So I tried this (4 machines for now):
input {

  beats {
    type => syslog
    client_inactivity_timeout => 1200
    port => 5047
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    tags => [ 'jira' ]
  }

  beats {
    type => syslog
    client_inactivity_timeout => 1200
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    tags => [ 'polluce' ]
  }

  beats {
    type => syslog
    client_inactivity_timeout => 1200
    port => 5045
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    tags => [ 'commserve' ]
  }


  syslog {
    type => omelasticsearch
    port => 5001
    codec => json
    tags => [ 'nagios' ]
}
}

filter {
  if [type] == "syslog" or [type] == "omelasticsearch" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}



output {
  if "jira" in [tags] {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "jira-%{+YYYY.MM.dd}"
    }
    stdout {}
  }

  if "polluce" in [tags] {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "polluce-%{+YYYY.MM.dd}"
    }
    stdout {}
  }

  if "commserve" in [tags] {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "commserve-%{+YYYY.MM.dd}"
    }
    stdout {}
  }

  if "nagios" in [tags] {
    elasticsearch {
      hosts => ["127.0.0.1:9200"]
      index => "nagios-%{+YYYY.MM.dd}"
    }
    stdout {}
  }
}
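
For reference, the shipping side of each `beats` input is a Filebeat config on the source machine pointing at its assigned port; a minimal sketch (the hostname and log paths are placeholders, not from the original post):

```yaml
# filebeat.yml on the Jira machine (port 5047 in the config above);
# hostname and paths are assumptions.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/jira/*.log

output.logstash:
  hosts: ["logserver.example.com:5047"]
  ssl.certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]
```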

Right now I'm only seeing 2 indexes: one from the Nagios machine, which ships via rsyslog (I couldn't install Filebeat there), and one from the commserve machine, which ships from the Windows version of Filebeat.
What am I doing wrong? Should I change my approach and set up a multiple-pipeline structure? Thank you for your support.
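
One way to check the filter itself is to try the grok pattern on a sample line outside Logstash; here is a rough Python approximation of it (the regex is a simplification of the real grok definitions, and the sample log line is invented):

```python
import re

# Rough Python equivalents of the grok sub-patterns used in the filter
# (simplified; Logstash's actual definitions are more permissive).
SYSLOG_RE = re.compile(
    r"(?P<syslog_timestamp>\w{3}\s+\d{1,2} \d{2}:\d{2}:\d{2}) "
    r"(?P<syslog_hostname>\S+) "
    r"(?P<syslog_program>[^\[:]+)(?:\[(?P<syslog_pid>\d+)\])?: "
    r"(?P<syslog_message>.*)"
)

# Invented sample line in standard syslog format.
line = "Feb  5 17:32:18 polluce sshd[1234]: Accepted password for root"
m = SYSLOG_RE.match(line)
print(m.groupdict())
```

If a line from one of the silent machines fails to match here, the grok filter will tag the event `_grokparsefailure` rather than drop it, so the index should still appear; a missing index usually means the events never arrive at all.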

Multiple pipelines would be better: each pipeline is independent,
and it will be easier to troubleshoot.
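
With multiple pipelines, each source gets its own config file and the event flows never mix, so no tag-based routing is needed; a sketch of `pipelines.yml` (the pipeline IDs and config paths are illustrative):

```yaml
# /etc/logstash/pipelines.yml -- one pipeline per source machine
# (IDs and paths are illustrative).
- pipeline.id: jira
  path.config: "/etc/logstash/conf.d/jira.conf"
- pipeline.id: polluce
  path.config: "/etc/logstash/conf.d/polluce.conf"
- pipeline.id: commserve
  path.config: "/etc/logstash/conf.d/commserve.conf"
- pipeline.id: nagios
  path.config: "/etc/logstash/conf.d/nagios.conf"
```

Each `.conf` file then holds just its own input, filter, and output.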

Thank you Izek.
Is there no way to correct the setup I already did? I'm working around the clock with The Big Man breathing fire down my neck...

  1. Use netstat to check that all the ports are listening.
  2. Use the monitoring API to see whether events reach the output or are being dropped somewhere.
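
For step 1, something like `netstat -tlnp | grep -E '5001|5044|5045|5047'` shows which ports Logstash is actually bound to. For step 2, the node stats API (`GET http://localhost:9600/_node/stats/pipelines`) reports per-pipeline event counts; a small sketch of reading them, using a made-up excerpt of the response:

```python
import json

# Made-up excerpt of a /_node/stats/pipelines response; the real one
# includes much more (plugin-level stats, reloads, queue info).
response = """
{"pipelines": {"main": {"events": {"in": 5000, "filtered": 5000, "out": 3200}}}}
"""

stats = json.loads(response)
for name, pipeline in stats["pipelines"].items():
    ev = pipeline["events"]
    # If "in" keeps growing but "out" does not, events are stuck or dropped
    # between the inputs and the outputs (e.g. a conditional that never matches).
    print(f"{name}: in={ev['in']} filtered={ev['filtered']} out={ev['out']}")
```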

Thanks Izek. My configuration is working after all. Sometimes Filebeat needs a refresh of the logstash-forwarder certificate to correctly do its job, so I scp'd the .crt again and the indexes are now showing correctly.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.