Multiple conf files handling issue in logstash


(Dazith Kj) #1

Hi All,

I have the below setup on my logstash config.

File name : custom.conf

input {
    beats {
        port => "5044"
        ssl => true
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    }
}

filter {
  grok {
    match => [
      "message",
      "(?<timestamp>\[[0-9]{4}.[0-9]{2}.[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}.[0-9]{3}\]) \s*\-\s*%{LOGLEVEL:log-level}\s*\-\s* (?<ip>[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3}.[0-9]{1,3})\s*\,\s*(?<corid>[[A-Z][a-z][0-9]]{11}|[[0-9][a-z]]{8}-[[0-9][a-z]]{4}-[[0-9][a-z]]{4}-[[0-9][a-z]]{4}-[[0-9][a-z]]{12})\s*\,\s*(?<interface>[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1}|[A-Z]{3}[0-9]{4}-[A-Z]{2}_[A-Z]{4}|[A-Z]{3}[0-9]{4}[a-z]{1}-[A-Z]{2}_[A-Z]{4}|[A-Z]{3}[0-9]{4}-[A-Z]{2}|[A-Z]{3}[0-9]{4}[a-z]{1}-[A-Z]{2}|[A-Z]{3}[0-9]{4}[a-z]{1}|[a-z]{9})\s*\,\s*(?<sequence>[a-z]{2}_[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1}_[a-z]*_[a-z]*|[a-z]{2}_[A-Z]{3}[0-9]{4}_[a-z]*_[a-z]*|[a-z]{2}_[A-Z]{3}[0-9]{4}_[[a-z][A-Z]]*|[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1}-[0-9]{1}|[a-z]{2}_[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1}-[0-9]{1}|[a-z]{2}\_[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1}_[[A-Z][a-z]]*|[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1}|[a-z]{2}_[a-z]{2}_[A-Z]{3}[0-9]{4}[a-z]{1})\s*\,\s*(?<log_point>[0-9]{4})\s*\,\s*%{GREEDYDATA:message_context}"
    ]
  }

}



output {
  elasticsearch {
    hosts => ["192.168.200.42:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}

This is working as expected. I have added another config file called stackhealth.conf with the content below.

File name : stackhealth.conf

input {
  beats {
    port => 5044
    ssl => true
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.200.42:9200"]
    sniffing => true
    manage_template => false
    index => "syshealth-index"
    document_type => "%{[@metadata][type]}"
  }
}

After I added both files to the /etc/logstash/conf.d location, Logstash still only processes custom.conf. How do I make both work?


#2

Multiple pipelines is the answer. https://www.elastic.co/guide/en/logstash/current/multiple-pipelines.html

As for your problem: maybe you haven't modified pipelines.yml?
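A minimal pipelines.yml sketch for this setup (note that multiple pipelines require Logstash 6.0 or later; the pipeline IDs below are just illustrative names):

```yaml
# pipelines.yml -- lives in the Logstash settings directory, not in conf.d
- pipeline.id: custom
  path.config: "/etc/logstash/conf.d/custom.conf"
- pipeline.id: stackhealth
  path.config: "/etc/logstash/conf.d/stackhealth.conf"
```

With separate pipelines, each file gets its own event flow, so the filters and outputs in one file no longer apply to events from the other.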


(Dazith Kj) #3

Hi @ventrca.

I am not seeing a pipelines.yml in my Logstash setup under /etc/logstash/conf.d. Any idea on this?


#4

Which version are you using? I have it in logstash/config


(Dazith Kj) #5

Below is the Logstash version @ventrca

root@localhost:/opt/logstash/bin# ./logstash -V
logstash 2.2.4

I have only two conf files which I have created under /etc/logstash/conf.d

root@localhost:/etc/logstash/conf.d# ls
stackhealth.conf test.conf


#6

Sorry, I can't help you with this. I'm using 6.6.2.


(Dazith Kj) #7

Thanks @ventrca !


(Magnus Bäck) #8

Multiple pipelines is the answer.

The problem is that you can't have two beats inputs that listen on the same port. Multiple pipelines won't help there.


(Dazith Kj) #9

Thanks @magnusbaeck. Is there any other way to satisfy this requirement?


(Magnus Bäck) #10

Is there any other way to satisfy this requirement?

What requirement? Multiple configuration files?


(Dazith Kj) #11

@magnusbaeck what I need is to publish specific data that matches two patterns to two different indexes in ES. If the requirement is not clear, please let me know and I'll explain more with a simple example.


(Dave Martin) #12

Either have your two sources identify themselves (with a custom field, maybe) and use that to route your documents, or have two listeners (on different ports) that add the routing data.

You can set a @metadata field with the name of the index you want a document to be indexed to, then use that field in the output block.
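A minimal sketch of that approach with a single beats input (the `log_source` field and the `app-*` index name are assumptions for illustration; the field would be set on the Beats side, e.g. via `fields` in filebeat.yml):

```
input {
  beats {
    port => 5044
  }
}

filter {
  # Route based on a custom field the shipper attached to each event
  if [fields][log_source] == "app" {
    mutate { add_field => { "[@metadata][target_index]" => "app-%{+YYYY.MM.dd}" } }
  } else if [fields][log_source] == "syslog" {
    mutate { add_field => { "[@metadata][target_index]" => "syshealth-index" } }
  }
}

output {
  elasticsearch {
    hosts => ["192.168.200.42:9200"]
    # @metadata fields are never indexed themselves, only used for routing
    index => "%{[@metadata][target_index]}"
  }
}
```

Because everything runs in one pipeline on one port, this avoids the duplicate-listener problem entirely.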


(system) #13

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.