Issue with Multiple .conf files

Hi All,
My ELK stack is set up on a single Ubuntu machine. I am using Logstash as a service to collect different logs and send them to Elasticsearch.

Initially, I created an auth.conf in the Logstash conf.d folder to parse Linux auth logs. After restarting Logstash, I could see the index (auth logs index) getting created in Kibana. Then I tried my application logs: I stopped Logstash, removed the auth.conf file, and added an application.conf file to the conf.d folder. After restarting Logstash, I could see the new index (application logs index) getting created in Kibana.

Later I found out that Logstash processes all the .conf files in the conf.d folder. So I stopped Logstash, deleted the previous two indices, and removed the .sincedb files so that it would read each file from the beginning again. I then placed both auth.conf and application.conf in the conf.d folder and restarted Logstash, but now I don't see any index getting created in Kibana.
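For reference, the cleanup steps I ran were roughly the following (the index names are mine; the sincedb location is the default data path of a package install of Logstash, so treat the paths as illustrative):

sudo systemctl stop logstash
curl -XDELETE 'localhost:9200/logstash-auth'
curl -XDELETE 'localhost:9200/application'
sudo rm /var/lib/logstash/plugins/inputs/file/.sincedb_*
sudo systemctl start logstash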

Please assist me in resolving the issue, also let me know if you need more information.

Welcome to our community! :smiley:
Can you please edit your post and remove the formatting? It makes it very hard to read, and harder for us to help you.

Thanks for that, can you share your configs?

This is my auth.conf

input {
  file {
    path => "/opt/logstash/auth.log"
    type => "syslog"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{SYSLOGTIMESTAMP:system.auth.timestamp} %{SYSLOGHOST:system.auth.hostname} sshd(?:\[%{POSINT:system.auth.pid}\])?: %{GREEDYDATA:system.auth.ssh.event} %{GREEDYDATA:system.auth.ssh.method} from %{IPORHOST:system.auth.ip} port %{NUMBER:system.auth.port}:%{GREEDYDATA:typo}" }
  }
  mutate {
    convert => { "bytes" => "integer" }
  }
  geoip {
    source => "system.auth.ip"
  }
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "logstash-auth"
  }
}

This is my application.conf

input {
  file {
    path => "/opt/logstash/services.log4j2.log"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "(?<INFO>\[(.*?)\]) %{TIMESTAMP_ISO8601:TIME} (?<Classname>\[(.*?)\]) %{WORD:Methodname} %{GREEDYDATA:Messagebody}" }
  }
}

output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "application"
  }
}

Please format your code/logs/config using the </> button, or markdown-style backticks. It makes things easier to read, which helps us help you :slight_smile:

The thing to take into account is that Logstash will merge both config files into one when it starts up. You will want to use conditionals to match each input with its own filter and output.
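For example, something like this, with a tag on each input and a matching conditional in the output (the tag and index names here are just illustrative):

input {
  file {
    path => "/opt/logstash/auth.log"
    tags => [ "authdata" ]
  }
}

output {
  if "authdata" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "logstash-auth"
    }
  }
}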

Thanks Mark for the follow-up. I tried using an if conditional in both .conf files as below, and ran /usr/share/logstash/bin/logstash -f /path/to/syslog.conf -f /path/to/application.conf, but I see only the application.conf index getting created.
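I also wondered whether only the last -f takes effect when it is passed twice; if so, pointing -f at the folder holding both files should concatenate them into a single pipeline. Something like this (the path is illustrative, and this is just my guess):

/usr/share/logstash/bin/logstash -f /path/to/conf.d/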

This is my application.conf

input {
  file {
    path => "/opt/logstash/services.log4j2.log"
    tags => [ "applicationdata" ]
    start_position => "beginning"
  }
}

filter {
  if "applicationdata" in [tags] {
    grok {
      match => { "message" => "(?<INFO>\[(.*?)\]) %{TIMESTAMP_ISO8601:TIME} (?<Classname>\[(.*?)\]) %{WORD:Methodname} %{GREEDYDATA:Messagebody}" }
    }
  }
}

output {
  if "applicationdata" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "application"
    }
  }
}

This is my syslog.conf

input {
  file {
    path => "/opt/logstash/syslog.log"
    tags => [ "syslogdata" ]
    start_position => "beginning"
  }
}

filter {
  if "syslogdata" in [tags] {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:systemtimestamp} %{SYSLOGHOST:systemhostname} %{WORD:Methodname}(?:\[%{POSINT:systempid}\])?: %{GREEDYDATA:servicename}: %{GREEDYDATA:Message}" }
      match => { "message" => "%{SYSLOGTIMESTAMP:systemtimestamp} %{SYSLOGHOST:systemhostname} %{WORD:Methodname}(?:\[%{POSINT:systempid}\])?: %{GREEDYDATA:Message}" }
    }
    if "_grokparsefailure" in [tags] {
      drop { }
    }
  }
}

output {
  if "syslogdata" in [tags] {
    elasticsearch {
      hosts => "localhost:9200"
      index => "slog${DATE}${TIME}"
    }
  }
}

I found an alternative, though: I created a separate pipeline for each .conf file in pipelines.yml, and after restarting the Logstash service I was able to see all the new indices getting created in Kibana.
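For anyone hitting the same problem, my pipelines.yml now looks roughly like this (the pipeline ids and paths are illustrative):

- pipeline.id: syslog
  path.config: "/etc/logstash/conf.d/syslog.conf"
- pipeline.id: application
  path.config: "/etc/logstash/conf.d/application.conf"

With separate pipelines, events from one file can no longer reach the other file's output, so no conditionals are needed.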

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.