Logstash Multi pipeline with beats input

Hi All,

I have one ELK stack and multiple clients sending their data to it. Each client should have its own index, and clients should not be able to see each other's data. I am using Filebeat on the client servers. The config is pretty simple; in the Filebeat input (on the client side) I use:

fields:
  source: "client1server"


In Logstash pipelines.yml:

- pipeline.id: beats
  config.string: |
    input {
      beats {
        port => 5044
        ssl => true
        ssl_certificate_authorities => ["/etc/logstash/config/certs/elasticsearch-ca.pem"]
        ssl_key => '/etc/logstash/config/certs/logstash-pkcs8.key'
        ssl_certificate => '/etc/logstash/config/certs/logstash.crt'
      }
    }
    output {
      if [source] == 'client1server' {
        pipeline { send_to => client1 }
      } else if [source] == 'client2server' {
        pipeline { send_to => client2 }
      }
    }
- pipeline.id: client1
  path.config: "/etc/logstash/conf.d/client1.conf"
- pipeline.id: client2
  path.config: "/etc/logstash/conf.d/client2.conf"

and the clientx.conf looks like this:
input {
  pipeline {
    address => client1
  }
}
output {
  elasticsearch {
    hosts => ["https://x:9200","https://x:9200","x:9200"]
    index => "client1-%{+YYYY.MM.dd}"
    cacert => "/etc/logstash/config/certs/elasticsearch-ca.pem"
    ssl_certificate_verification => true
    user => 'user'
    password => "${elasticsearch.password}"
  }
}

I manage to bring Logstash up fine (although for some reason I have to start it from the CLI; if I just do systemctl start logstash it will be up but not listening on 5044).
After it is listening on 5044 I start Filebeat on the client side. It does create two indices (one for client1 and one for client2), but I see the data from both servers in both indices, as if the condition in the output of pipelines.yml is being ignored! I see fields.source with both server names in both data views in Kibana.
What am I missing? Why is the condition not working properly? I'd appreciate any help.

I never used pipelines.yml like that.
This is how I use it.

pipelines.yml is a simple two-liner per pipeline: the pipeline id and the path to its config file.
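
A minimal sketch (the id and path here are just placeholders for your own):

- pipeline.id: main
  path.config: "/etc/logstash/conf.d/main.conf"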

Then in the config file:

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate_authorities => ["/etc/logstash/config/certs/elasticsearch-ca.pem"]
    ssl_key => '/etc/logstash/config/certs/logstash-pkcs8.key'
    ssl_certificate => '/etc/logstash/config/certs/logstash.crt'
  }
}

In the filter section I do something like this. Since your custom field shows up as fields.source in Kibana, the condition checks [fields][source]:

filter {
  if [fields][source] == "client1server" {
    mutate { add_field => { "[@metadata][target_index]" => "client1-indexname" } }
  }
  if [fields][source] == "client2server" {
    mutate { add_field => { "[@metadata][target_index]" => "client2-indexname" } }
  }
}

output {
  elasticsearch {
    hosts => ["https://x:9200","https://x:9200","x:9200"]
    index => "%{[@metadata][target_index]}"
    cacert => "/etc/logstash/config/certs/elasticsearch-ca.pem"
    ssl_certificate_verification => true
    user => 'user'
    password => "${elasticsearch.password}"
  }
}

And the Logstash pipeline will run as one: it takes all the data from all clients but sends each event to the proper index.
