How to create multiple indexes with multiple inputs in Logstash

I would like some help configuring multiple indexes from multiple inputs with Logstash.
Is my configuration below correct?

input {
  tcp {
    port => "5140"
    codec => json
    type => "syslog"
  }
  tcp {
    port => "5141"
    codec => json
    type => "syslog"
  }
  tcp {
    port => "5142"
    codec => json
    type => "syslog"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI:syslog_index}-\s*%{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
  }
  json {
    source => "syslog_message"
  }
}

output {
  stdout { codec => rubydebug }
  if port => "5140" {
    elasticsearch {
      hosts => ["https://xxxx:9200", "https://xxxx:9200"]
      user => "elastic"
      password => "xxxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "jstest1-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
  if port => "5141" {
    elasticsearch {
      hosts => ["https://xxxxx:9200", "https://xxxxx:9200"]
      user => "elastic"
      password => "xxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "jstest2-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
  if port => "5142" {
    elasticsearch {
      hosts => ["https://xxxx:9200", "https://xxxx:9200"]
      user => "elastic"
      password => "xxxxxxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "jstest3-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
}

I would try changing your if port to if type.
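For example, something like this (a minimal sketch only, assuming each tcp input is given its own distinct type such as "client1" and "client2" instead of "syslog" for all of them; hosts and index names are placeholders):

input {
  tcp {
    port => "5140"
    codec => json
    type => "client1"
  }
  tcp {
    port => "5141"
    codec => json
    type => "client2"
  }
}

output {
  # route each event on the type set by the input it arrived on
  if [type] == "client1" {
    elasticsearch {
      hosts => ["https://xxxx:9200"]
      index => "jstest1-%{+YYYY.MM.dd}"
    }
  } else if [type] == "client2" {
    elasticsearch {
      hosts => ["https://xxxx:9200"]
      index => "jstest2-%{+YYYY.MM.dd}"
    }
  }
}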

You can also use the @metadata field. The idea is that you add an @metadata field to each of your inputs and then use a conditional on that field in your output. Fields under @metadata are never sent to Elasticsearch, so you do not have to delete them yourself.

add_field => { "[@metadata][tag]" => "give it a name" }

if [@metadata][tag] == "give it a name" {
  # do your output for that input
}
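Put together, it could look roughly like this (a sketch only; the tag values, hosts, and index names are placeholders to adapt):

input {
  tcp {
    port => "5140"
    codec => json
    add_field => { "[@metadata][tag]" => "client1" }
  }
  tcp {
    port => "5141"
    codec => json
    add_field => { "[@metadata][tag]" => "client2" }
  }
}

output {
  if [@metadata][tag] == "client1" {
    elasticsearch {
      hosts => ["https://xxxx:9200"]
      index => "jstest1-%{+YYYY.MM.dd}"
    }
  } else if [@metadata][tag] == "client2" {
    elasticsearch {
      hosts => ["https://xxxx:9200"]
      index => "jstest2-%{+YYYY.MM.dd}"
    }
  }
  # nothing under [@metadata] is written to Elasticsearch
}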

In your case an event is only ever indexed into one index. If you have three outputs, that creates three sets of connections to Elasticsearch. If you use a sprintf reference to set the index name, you only need one set.

filter {
    if [port] == "5140" {
        mutate { add_field => { "[@metadata][indexPrefix]" => "jstest1" } }
    } else if [port] == "5141" {
        mutate { add_field => { "[@metadata][indexPrefix]" => "jstest2" } }
    } else if [port] == "5142" {
        mutate { add_field => { "[@metadata][indexPrefix]" => "jstest3" } }
    }
}
output {
    if [@metadata][indexPrefix] {
        elasticsearch {
            hosts => ["https://xxxx:9200", "https://xxxx:9200"]
            user => "elastic"
            password => "xxxxxxxx"
            cacert => "/etc/logstash/certs/ca.crt"
            index => "%{[@metadata][indexPrefix]}-%{+YYYY.MM.dd}"
            action => "index"
        }
    }
}

As I said, that only works because one event only goes to one index. I cannot find it right now but someone recently asked a question where they had multiple elasticsearch outputs and the conditional for each one was if "someTag" in [tags]. That could result in an event going to multiple indexes, so those outputs could not be combined.
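For illustration, an output section like the following (hypothetical tag names) cannot be collapsed into a single elasticsearch output, because an event carrying both tags matches both conditionals and has to be written to both indexes:

output {
  # an event tagged with both "someTag" and "otherTag" goes to both indexes
  if "someTag" in [tags] {
    elasticsearch {
      hosts => ["https://xxxx:9200"]
      index => "some-%{+YYYY.MM.dd}"
    }
  }
  if "otherTag" in [tags] {
    elasticsearch {
      hosts => ["https://xxxx:9200"]
      index => "other-%{+YYYY.MM.dd}"
    }
  }
}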


Hello aaron-nimocks,

What do you think of this configuration?

input {
  tcp {
    port => "5140"
    codec => json
    tags => [ "client1" ]
  }
  tcp {
    port => "5141"
    codec => json
    tags => [ "client2" ]
  }
  tcp {
    port => "5142"
    codec => json
    tags => [ "client3" ]
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI:syslog_index}-\s*%{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
  }
  json {
    source => "syslog_message"
  }
  if [port] == "5140" {
    mutate { add_field => { "[@metadata][indexPrefix]" => "jstest1" } }
  } else if [port] == "5141" {
    mutate { add_field => { "[@metadata][indexPrefix]" => "jstest2" } }
  } else if [port] == "5142" {
    mutate { add_field => { "[@metadata][indexPrefix]" => "jstest3" } }
  }
}

output {
  stdout { codec => rubydebug }
  if "client1" in [tags] {
    elasticsearch {
      hosts => ["https://xxxx:9200", "https://xxxx:9200"]
      user => "elastic"
      password => "xxxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "jstest1-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
  if "client2" in [tags] {
    elasticsearch {
      hosts => ["https://xxxx:9200", "https://xxxx:9200"]
      user => "elastic"
      password => "xxxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "jstest2-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
  if "client3" in [tags] {
    elasticsearch {
      hosts => ["https://xxxx:9200", "https://xxxx:9200"]
      user => "elastic"
      password => "xxxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "jstest3-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
}

Hello Badger,

What do you think of this configuration?

input {
  tcp {
    port => "5140"
    codec => json
    tags => [ "client1" ]
  }
  tcp {
    port => "5141"
    codec => json
    tags => [ "client2" ]
  }
  tcp {
    port => "5142"
    codec => json
    tags => [ "client3" ]
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOG5424PRI:syslog_index}-\s*%{SYSLOGHOST:syslog_hostname} %{GREEDYDATA:syslog_message}" }
  }
  json {
    source => "syslog_message"
  }
  if [port] == "5140" {
    mutate { add_field => { "[@metadata][indexPrefix]" => "jstest1" } }
  } else if [port] == "5141" {
    mutate { add_field => { "[@metadata][indexPrefix]" => "jstest2" } }
  } else if [port] == "5142" {
    mutate { add_field => { "[@metadata][indexPrefix]" => "jstest3" } }
  }
}
output {
  if [@metadata][indexPrefix] {
    elasticsearch {
      hosts => ["https://xxxx:9200", "https://xxxx:9200"]
      user => "elastic"
      password => "xxxxxxxx"
      cacert => "/etc/logstash/certs/ca.crt"
      index => "%{[@metadata][indexPrefix]}-%{+YYYY.MM.dd}"
      action => "index"
    }
  }
}

It looks reasonable. The question is ... does it do what you want?


I will test it and see.
